


That's quite a post, Henrik! There is a bunch to absorb.

Some very interesting analysis.


--
Jim Oberholtzer
Agile Technology Architects


-----Original Message-----
From: WEB400 [mailto:web400-bounces@xxxxxxxxxxxx] On Behalf Of Henrik Rützou
Sent: Sunday, February 26, 2017 9:33 AM
To: Web Enabling the AS400 / iSeries
Subject: [WEB400] a little for the node.js / javascript freaks on a sunday afternoon

*First, a little elaboration on the use of the eval instruction, JSON, and objects in JavaScript.*


Originally, JavaScript was a scripting language that was interpreted statement by statement.


Today JavaScript is JIT-compiled into native code and executed by the JavaScript VM. This is why you get JavaScript syntax errors already at load time, but also why JavaScript is a very fast language.


The problem, and the reason eval should be avoided, is that eval is an instruction that may create executable code within the script itself. So whenever the V8 engine meets an eval instruction that is executed, it will decompile the running code and maybe recompile the code after the eval instruction, or it will fall back to executing the JavaScript as bytecode, creating huge overheads in processing time.
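For a feel of the overhead, here is a quick-and-dirty timing sketch (illustrative only - absolute numbers vary by engine and machine):

function add(a, b) { return a + b; }

// normal call: V8 compiles add() once and reuses the compiled code
console.time('compiled call');
var sum1 = 0;
for (var i = 0; i < 100000; i++) { sum1 = add(sum1, 1); }
console.timeEnd('compiled call');

// eval per call: the function source is re-parsed on every iteration
console.time('eval call');
var sum2 = 0;
for (var j = 0; j < 100000; j++) { sum2 = eval('(' + add.toString() + ')')(sum2, 1); }
console.timeEnd('eval call');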


If you have the time, see this presentation by Lars Bak on how V8 handles code: https://youtu.be/r5OWCtuKiAk


Regarding JSON (which is what it says it is: an intermediate, string-based notation of a JavaScript object or array) - functions are allowed in a JavaScript object, but not in JSON:


myObj = {}
myObj.MYSCRIPT = function(message) { alert(message) }
myObj.MYSCRIPT("Hello World")


But if you stringify myObj to JSON, it will drop MYSCRIPT, since typeof 'function' isn't supported in JSON - any imported JSON would otherwise trigger a recompile.
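You can see the drop directly (console.log is used here so the snippet also runs in node.js):

var myObj = {};
myObj.NAME = "Henrik";
myObj.MYSCRIPT = function(message) { console.log(message); };

// MYSCRIPT is silently skipped by JSON.stringify
console.log(JSON.stringify(myObj)); // -> {"NAME":"Henrik"}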


An ugly workaround is to stringify the function in the JavaScript object before you convert it into JSON:


myObj.MYSCRIPT = myObj.MYSCRIPT.toString();
myJSON = JSON.stringify(myObj);


And on the receiving end:


myNewObj = JSON.parse(myJSON);
myNewObj.MYSCRIPT = eval('(' + myNewObj.MYSCRIPT + ')')


That, of course, will trigger a decompile and/or a shift to bytecode. :-(


It is important to remember that JSON was never intended to be a data interchange format; it was originally made for cloning objects, since direct copying of objects in JavaScript is done by reference.



myObj = {}
myObj.NAME = "Henrik"
myNewObj = myObj
myNewObj.NAME = "Scott"


This will also change the content of myObj, since myNewObj is just a name that points back to myObj. (This is exactly what happens in my clean-up code for the result of db2/db2a.)


To create a shallow copy as a clone of myObj, you convert it to a string and then create a new object with no references to the old one.


myJSON = JSON.stringify(myObj)
myNewObj = JSON.parse(myJSON)


To create a deep copy as a clone of myObj (a copy that includes the original object's properties but has no reference to the old object) requires a lot more coding, but it can be done.
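A minimal sketch of such a deep clone (plain objects and arrays only; dates, functions and cyclic references would need extra handling):

function deepClone(value) {
  // primitives are copied by value anyway
  if (value === null || typeof value !== 'object') return value;
  var copy = Array.isArray(value) ? [] : {};
  for (var key in value) {
    if (value.hasOwnProperty(key)) {
      copy[key] = deepClone(value[key]); // recurse into nested objects/arrays
    }
  }
  return copy;
}

var myNewObj = deepClone(myObj); // no references back to myObj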





*IBM’s db2/db2a DB2 connector*


Many think that IBM's DB2 connector in node.js returns JSON in sqlResult. It doesn't - it returns a JavaScript object, and in order to find out what is inside that object (or any JavaScript object, actually) there are two native ways without using tools:


myJSON = JSON.stringify(sqlResult)


That doesn't include object elements of typeof 'function', nor object properties,


or


console.log(sqlResult)


which (at least in Chrome and node.js) prints the entire object, including object properties, to the console -

and not console.log("my sqlresult: " + sqlResult), which - just like an alert() - returns "my sqlresult: [object Object]" in the console.


To summarize db2a:

- Returns a JavaScript object, not JSON
- Terribly slow (not only an IBM i problem)
- All fields are returned as typeof 'string', regardless of the field types you specify
- All fields are returned with trailing whitespace / all strings in JavaScript are variable length


Workaround code:



stmt.fetchAllSync(function callback(result) {

  // clean up IBM's db2a mess
  var rowObj = {}

  for (var i = 0; i < result.length; i++) {
    rowObj = result[i]
    for (var key in rowObj) {
      if (rowObj.hasOwnProperty(key)) {
        if (typeof rowObj[key] == 'string') {
          if (isNaN(rowObj[key]) == false && rowObj[key].trim() != '') {
            // numeric fields come back as strings - convert them back
            rowObj[key] = parseFloat(rowObj[key])
          } else {
            // character fields come back padded - strip trailing blanks
            rowObj[key] = rowObj[key].trimRight()
          }
        }
      }
    }
  }
})






*IBM and CCSID*


This is probably an error: files written by fs start in CCSID 819, but if they are rewritten a couple of times they shift to CCSID 1208.




*IBM and QP0ZSPWT jobs hanging and locking the IP port*



An annoying error: when a node.js program is ended with an ENDJOB *IMMED, sometimes the QP0ZSPWT job hangs until the next IPL. What is worse is that it blocks the IP port.




*IBM S812 - one core or 4 cores*


When IBM announced this new entry-level POWER8, they excluded 3 of the cores if it is ordered with IBM i, while they are included if it is ordered with AIX. One core is a node.js killer - one is better off on a $500 PC with 4 cores. Is that the direction IBM wants us to go?






*powerEXT zdb – a NoSQL database for node.js*


As a consequence of SQL's slowness, I have decided to build a little NoSQL DBMS that is platform agnostic. My tests show an RLA access time in node.js of 0.006 ms for a keyed index search and 0.012/0.014 ms for the corresponding object retrieval, which is acceptable compared with native RPGLE RLA at 0.048 ms.


This is of course not for big tables, but for referential tables that can be bound to an SQL result by late binding, making the SQL simpler and faster.
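Roughly, the late-binding idea looks like this (table and field names are examples only):

// small referential table held in memory
var countries = { "DK": { NAME: "Denmark" }, "US": { NAME: "United States" } };

// bind the descriptions onto the SQL result rows after the (now simpler) SQL has run
result.forEach(function(row) {
  var ref = countries[row.COUNTRYID];      // keyed in-memory lookup
  row.COUNTRYNAME = ref ? ref.NAME : null; // late-bound element on the row
});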


Btw., IMHO SQL is also over-engineered and has become a "write a whole program in one (very long) statement" exercise. At the same time SQL has become DB proprietary, which makes it unportable. Don't expect an SQL statement written for DB2 for i to run on either DB2 LUW or SQL Server, or vice versa.


Besides that, I build program generators, some of which generate EXT JS UI code on the fly while others generate server-side code such as REST/CRUD services. I have decided these are maybe best residing in their system's native environment - and by native I mean not only native IBM i SQLRPGLE but also native .NET C#, since my targets are both IBM i and MS SQL Server customers. But that's another story (*).


Program generators (whatever programming language they output) require a lot of specific metadata, templates and, in many cases, access to SYSCOLUMNS.


If the program generation is intended for 'on the fly' UI generation, it also requires user rules, and all these data can't be put into an SQL view: a single-row request against a table may take 50 ms in node.js, so where a big UI may require 5,000 lookups, that adds up to 250 seconds in SQL, while it only takes approx. 75 ms in zdb.


The database will work in any number of HTTP daemons one may launch; its physical data is placed in one IFS file per table in JSON format, while it runs as JavaScript objects internally in node.js.


The IFS file may be overridden (renewed) by either node.js or by native programs that regenerate the table from the content of DB files; that may, however, cause data loss. Synchronization with a DB table may be done from node.js by calling a REST/CRUD service where zdb acts as the client. A DB table can of course also be created by loading the table from the IFS file.
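A sketch of what that client call could look like, using the tableSyncRest / tableSyncType attributes from the configuration shown below (the endpoint is the example from that configuration):

var http = require('http');

// POST the stringified table to the REST/CRUD service; zdb acts as the client
function syncTable(tableJSON, callback) {
  var req = http.request({
    host: '127.0.0.1',
    port: 8080,
    path: '/pextwebcgi/AAArest.pgm',
    method: 'POST',
    headers: { 'Content-Type': 'application/json' }
  }, function(res) {
    res.on('data', function() {});   // drain the response
    res.on('end', function() { callback(res.statusCode); });
  });
  req.write(tableJSON); // the table serialized with JSON.stringify
  req.end();
}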


zdb is based on a configuration file - everything is still subject to change:


var zdbConf = [
  {
    tableName : "AAA",
    tablezdb : "./zdb/aaa.table",
    tableScope : "$global",
    tableReadOnly : false,
    tableCloneObj : true,
    tableAutoLoad : true,
    tableSync : true,
    tableSyncRest : "http://127.0.0.1:8080/pextwebcgi/AAArest.pgm",
    tableSyncType : "POST",
    tableKey : ["MYID"],
    tableMandatory : ["MYID","ABC"]
  }
]



Creating the internal database:


scope = new zdb('scope');



Loading tables:


(scope is the scope created above)

zdb.loadAuto(scope)        // load all tables in the scope that have the tableAutoLoad attribute = true
zdb.open(tableName, scope) // manual load or forced reload



Methods:

obj = zdb.readByKey(tableName, key)
obj = zdb.readByRrn(obj)
... (whether the object itself or a clone is returned is decided by the table's 'tableCloneObj' property; returning the object itself is only recommended if tableReadOnly is true)

rc = zdb.join(tableName, toObj, key, joinElementName)
... joinElementName will join the zdb object as a single element; otherwise the elements are added in the receiving object's root and will overwrite elements with the same name.

... methods where the tableReadOnly property = false:

rc = zdb.write(objClone)
rc = zdb.update(objClone)
rc = zdb.delete(objClone)
rc = zdb.save(tableName)   // force save
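Put together, typical use could look like this (a sketch based on the signatures above; the key value is made up):

var scope = new zdb('scope');
zdb.loadAuto(scope);                      // load all tables with tableAutoLoad = true

var row = zdb.readByKey('AAA', ['A01']);  // keyed read; object or clone per tableCloneObj
if (row) {
  row.ABC = 'changed';
  zdb.update(row);                        // only when tableReadOnly = false
  zdb.save('AAA');                        // force save to the IFS file
}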


It all sounds relatively simple, but it is not if you look at the structure any node.js request has to go through before your function is called:


HTTP Server Apache
    PROXY/FASTCGI
        LOAD BALANCER
            HTTP DAEMON
                YOURFUNCTION


The PROXY/FASTCGI and LOAD BALANCER work very much like the QZSRCGI programs, where the server launches a number of jobs so that if one is busy, another is chosen. Besides that, you only reach the node.js environment once you enter the HTTP DAEMON, whose role is to redirect requests to YOURFUNCTION and other functions.


YOURFUNCTION may load data, but the function only exists from when it is required by the HTTP DAEMON until it ends by sending data to the requester; afterwards it is destroyed, and the data it loaded are lost with it. Frequently used tables must therefore be controlled and passed by the HTTP DAEMON.


Yes, there are other ways to structure node.js applications: one HTTP port per app is one (which often results in 100,000-statement monoliths), or requiring all functions when the HTTP DAEMON is initialized, but that requires that all functions are able to act as modules, and so on. EXPRESS may also have another solution, but as far as I have read it doesn't meet the requirements I seek.


The bottom line is that zdb can be loaded by my HTTP DAEMON and shared by any function through the object that also holds other global variables.
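In code terms, the pattern is simply that the daemon owns the long-lived state and hands it to every function it dispatches (a bare-bones sketch; module and function names are examples only):

var http = require('http');

// loaded once when the daemon starts and kept alive between requests,
// e.g. shared.scope = new zdb('scope'); zdb.loadAuto(shared.scope);
var shared = { scope: null, globals: {} };

http.createServer(function(req, res) {
  // the daemon only routes; YOURFUNCTION receives the shared object
  if (req.url === '/yourfunction') {
    yourFunction(req, res, shared);
  } else {
    res.statusCode = 404;
    res.end();
  }
}).listen(8080);

function yourFunction(req, res, shared) {
  // request-scoped work here; anything not kept in 'shared' dies with the request
  res.setHeader('Content-Type', 'application/json');
  res.end(JSON.stringify({ ok: true }));
}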


And please remember what powerEXT for node.js is about: if you want to run chat forums or control your neighbor's drone, choose otherwise - this is about OLTP business applications.






*(*) The story*


When EXT JS went from version 4 to version 5, they removed their open source version and made their licenses come in packages of 5 developers. At that time they had approx. 500,000 users on their forum, most of whom ran on the open source license.


They were overloaded with questions and free support, and even those who ran on a single license generated the same amount of support (if not more) as their "customers with a budget for both license and education" did.


The big question for me is where I find "customers with a budget" - not for powerEXT Core for node.js or IBM i, which is MIT licensed, but for what follows: the EXT JS UI generator. You can be damn sure these customers don't run their OLTP business-critical applications on mongoDB or MySQL; most will run on MS SQL Server, DB2 and, to some degree, Oracle.


Besides that, I live in a country where 80% of the "customers with a budget" run MS Business Solutions on MS SQL Server ;-)




--
Regards,
Henrik Rützou

http://powerEXT.com


