My best guess, based on your description, is that whatever is calling
the API is using the HTTP 1.0 protocol. This protocol, which was
published in 1996, did not offer the "chunked" transfer coding. That
would also explain why "Connection: close" is in your header info -- it
tells the client to keep receiving until the connection is closed.
Starting with HTTP 1.1 (originally published in 1997, but didn't become
mainstream until further revisions were made in 1999) the chunked
transfer coding is part of the core concepts of this protocol. One
important idea in 1.1 is to keep the connection open for multiple
transfers, because it reduces the overhead of negotiating a new TCP
channel (and frequently TLS parameters as well.) 1.1 is absolutely
ubiquitous; although it's gradually being replaced by HTTP/2, it is
still by far the most widely used HTTP version.
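To make the framing concrete, here is a minimal illustration in Python (the JSON payload and chunk sizes are made up) of what a chunked body looks like on the wire and how a client is expected to strip the size prefixes before handing the data to the application. Note that the size prefixes are hexadecimal, which is why they show up as a few odd characters just before the first { when a client fails to strip them.

```python
# Illustrative only: a tiny chunked body as it appears on the wire.
# Each chunk is prefixed with its size in hex followed by CRLF, and the
# body ends with a zero-length chunk. The size lines are framing, not data.
raw_chunked_body = (
    b"19\r\n"                         # next chunk is 0x19 (25) bytes long
    b'{"customer":"ACME","id":1\r\n'  # 25 bytes of chunk data, then CRLF
    b"2\r\n"                          # next chunk is 2 bytes long
    b"0}\r\n"
    b"0\r\n"                          # zero-length chunk terminates the body
    b"\r\n"
)

def dechunk(data: bytes) -> bytes:
    """Strip chunked transfer coding and return only the payload bytes."""
    payload = bytearray()
    pos = 0
    while True:
        eol = data.index(b"\r\n", pos)
        size = int(data[pos:eol], 16)          # chunk size is hexadecimal
        if size == 0:
            break                              # last chunk reached
        start = eol + 2
        payload += data[start:start + size]
        pos = start + size + 2                 # skip chunk data and its CRLF
    return bytes(payload)

print(dechunk(raw_chunked_body).decode())      # {"customer":"ACME","id":10}
```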
My guess is that whatever is calling your API is still using 1.0, which
didn't support chunked coding. That's unusual -- I haven't seen a client
still using this older protocol in well over a decade, but I'm sure they
are still out there -- and it perfectly fits your description.
So -- if the server allows you to specify the content-length header,
that should solve the problem for you. (However, in my experience,
many HTTP servers don't allow it -- they discard the content-length,
since they consider it their responsibility to establish this, not the
application's responsibility. They also often handle CCSID translation,
which can change the length of the content.)
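Purely for illustration (this is not the RPG/YAJL change itself), the general shape of the workaround looks like the CGI-style Python sketch below: encode the body first, measure it, and emit Content-Length ahead of the payload. The field names are invented.

```python
import json
import sys

# Hypothetical response payload; in the real program this would be the JSON
# document built by YAJL.
payload = {"status": "ok", "orders": [1001, 1002]}

# Encode first, then measure: Content-Length counts bytes of the encoded body,
# so any character-set conversion must happen before the length is taken.
body = json.dumps(payload).encode("utf-8")

out = sys.stdout.buffer
out.write(b"Content-Type: application/json\r\n")
out.write(b"Content-Length: " + str(len(body)).encode("ascii") + b"\r\n")
out.write(b"\r\n")          # blank line separates headers from the body
out.write(body)
out.flush()
```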
If adding content-length is working for you, I guess it's a good
compromise for now. I would strongly recommend updating the caller to
use 1.1 or newer, though. 1.0 is well over 20 years out of date.
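As a sketch of what an updated caller could look like -- the host name and path below are placeholders -- an HTTP/1.1 client library removes the chunk framing itself, so the application only ever sees the JSON:

```python
import http.client
import json

# Hypothetical host and path; substitute the real API endpoint.
conn = http.client.HTTPConnection("myibmi.example.com", 80)
conn.request("GET", "/web/api/orders")   # http.client speaks HTTP/1.1

resp = conn.getresponse()
# The library handles both framing styles: if the server sends
# Transfer-Encoding: chunked, the hex size lines are stripped here; if it
# sends Content-Length, exactly that many bytes are read. Either way,
# read() returns only the body.
data = json.loads(resp.read())
conn.close()
print(data)
```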
On 10/18/2022 3:39 PM, smith5646midrange@xxxxxxxxx wrote:
I don't know where the chunk data processing problem is but...
I created another version of yajl_writeStdout that included the Content-length: parameter in the header and it is now returning just the JSON data.
No more chunk data.
-----Original Message-----
From: Scott Klement <web400@xxxxxxxxxxxxxxxx>
Sent: Tuesday, October 18, 2022 12:45 PM
To: Web Enabling the IBM i (AS/400 and iSeries) <web400@xxxxxxxxxxxxxxxxxx>; smith5646midrange@xxxxxxxxx
Subject: Re: [WEB400] YAJL
They aren't "extra" -- they are part of the HTTP protocol. They tell the HTTP client how many bytes need to be received.
They should not be included in the data that you process -- they are there just to tell the HTTP program how to receive the data, and it should not include them in the actual downloaded result. Something is wrong with the HTTP client that is calling out to the Apache server.
On 10/17/2022 9:25 PM, smith5646midrange@xxxxxxxxx wrote:
The program that is receiving the data is expecting pure JSON data to be returned. Instead it is receiving these extra bytes prior to the first { and is abending.