This is an occasional blog about IET's use of CA Gen for internal development as well as thoughts, tips and techniques on using CA Gen for application development. It is aimed at the CA Gen development professional, so please excuse the jargon and assumed level of knowledge about CA Gen. Reference will also be made to our products to put the development into context, so if you are not familiar with these, please visit the IET web site by clicking on the company logo.
Monday, 30 March 2020
View Descriptions
Thursday, 16 January 2020
Action Diagram Bookmarks
They wanted the ability to add bookmarks into the action diagram so that they could quickly and easily move between different sections. For example, if you have to edit code in different places and move up and down between them, rather than trying to find the locations by scrolling up and down, it is much easier to place a couple of bookmarks and then jump between them.
Monday, 25 March 2019
Phased Upgrades for GuardIEn and CA Gen
Friday, 17 March 2017
Bye Bye TIREVENT
SET temp_tirevent.ief_supplied.command TO "CLICK"
USE tirevent
WHICH IMPORTS: temp_tirevent.ief_supplied TO Work View in.ief_supplied
Wednesday, 21 October 2015
Inline Code Example
Friday, 17 July 2015
Managing Web Services
We were recently asked by a GuardIEn customer whether they could use our XOS (External Object Support) add-on to manage the generated WSDL/XSL files so that as they promoted a change through the life-cycle, these files could be 'migrated' to the next environment along with the object migration of the procedure steps.
Whilst this would have been possible, it raised an interesting point about whether generated code should be copied between environments or re-generated from the model in the next environment.
I don't think it is a good idea to copy the generated source code from an uncontrolled environment like the first development model. The reason for this is that the state of the model and in particular the synchronisation between the model and the generated code is difficult to establish. This is especially true for code generated from the toolset, since it could be generated without uploading the changes to the encyclopaedia, or the model could be changed without re-generating the code.
At IET we recommend that the Gen objects are migrated to the next environment's model(s) as part of a controlled 'system update'. When using GuardIEn, the automated impact analysis will ensure that all of the code affected by the migrated objects is regenerated and installed. Generating direct from the encyclopaedia ensures that the model and generated code are synchronised.
A further complication arises because the web service definition can contain interfaces to multiple procedure steps. The WSDL and XSL files are tied to the import and export views for the procedure step and need to be regenerated when the procedure step's interface changes. What happens if you have a web service for multiple procedure steps, change the interface of two or more of them, and then only migrate one of the changed procedure steps to the next environment? If you copy the entire web service, the interface definitions in the web service will not match the view structure of the procedure steps in the next environment.
It therefore seemed a bad idea to copy the generated WSDL/XSL files from the toolset; the same principle of re-generating code from a stable model should be adopted. The problem, though, was that the web service generation feature of CA Gen was only available from Gen Studio and not from encyclopaedia generation.
In consultation with the customer, we decided that the best approach would be to develop a new web service generation feature in GuardIEn so that the web services could be properly managed via re-generation from the model as part of a GuardIEn system update. This new feature is now available in the latest service packs released yesterday.
Monday, 3 February 2014
Exit State CGVALUE
IF exitstate IS EQUAL TO database_updated
The generated code does not test the exitstate name; instead it tests a special property of the exitstate called CGVALUE. The advantage of testing the CGVALUE is that it does not change if the exitstate is renamed or when the exitstate is migrated between models.
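As a sketch of the idea (illustrative Python with a hypothetical CGVALUE and generator logic, not CA Gen's actual implementation):

```python
# Illustrative only: hypothetical CGVALUE and generator logic, not CA Gen's
# actual implementation.
CGVALUES = {"database_updated": 17}  # model-wide numeric value for the exitstate

def emit_exitstate_test(name):
    # the generator resolves the exitstate name to its CGVALUE at generation
    # time, so the emitted comparison uses the number, not the name
    return f"IF exit_state_value == {CGVALUES[name]}"

code = emit_exitstate_test("database_updated")
print(code)  # IF exit_state_value == 17

# Renaming the exitstate later does not invalidate the already generated
# code, because the comparison is against the unchanged CGVALUE
CGVALUES["db_updated"] = CGVALUES.pop("database_updated")
```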
When you create an exitstate on the toolset, an initial value is assigned, but when the model is uploaded to the encyclopaedia, a new model-wide CGVALUE is assigned to the exitstate that will differ from the value initially assigned in the toolset. This means that any code generated on the toolset prior to re-downloading the model or subset will still use the old value but any newly generated code will reference the new value.
Therefore, when you create a new exitstate, you should ensure that any code generated on the toolset that references it is re-generated once you have re-downloaded the model/subset, so that the CGVALUE is consistent.
Rapide uses the CGVALUE in the window XML files and for externalising the exitstate messages in the strings file and hence you will also need to regenerate the Rapide window manager files and the strings file as well as the generated code.
Friday, 2 August 2013
Individual Model Backups
What happens if you have a problem with one model caused by a corruption or 'user error'?
A common technique for backing up individual models from the CSE is to run a frequent download "with upload option" of the model to create an update.trn for the model to be used in case of an issue with the model. If you need to revert to the backup version of the model, you can rename or delete the current model and upload from the update.trn file created by the download.
A major problem with this approach is that it creates a new model in the CSE that does not retain ancestry with other models in the same project. You then need to adopt the model, which can be very time-consuming if it needs to be adopted to multiple models, and can also be error-prone: if any of the objects that previously had ancestry have since been renamed in one of the models, the adoption will not re-establish ancestry.
The technique that we use is to run an extract with the apply option instead. This creates a child model which can then be loaded back into the CSE with ancestry retained. Child models are not very usable, so there is an additional step to copy the child model to create a full model and then delete the child model; ancestry is preserved throughout.
At IET we have an automated weekend script (executed as a user-defined task using the Task Assistant) that extracts all of our current models to a backup file as an additional level of backup over and above the database backups and archive logs. Thankfully we have not yet had to use them, but it is comforting to know that they are available.
Friday, 26 October 2012
PStep USE and Stubs
pstep1 --> ab --> pstep2
This worked fine until one day they re-generated the server manager for server1 and when the action block attempted the use of server2, there was a runtime failure.
The reason was that at some point they had converted the action block in this model to a 'stub' (it is developed in a different model). A p-step USE is implemented in the generated code by a runtime function call that references the called server's details, which are held in a table generated into the server manager. The Gen server manager generator needs to know about all of the servers that are called by the p-steps and subordinate action blocks. If some of these subordinate action blocks are 'stubs', the generators will not detect that the action block calls a server, and that server will not be added to the table.
However, the code is still generated and the load module build will succeed. The code will also run until the p-step USE is invoked, at which point a runtime error results.
This can be difficult to detect, and so we will develop a new VerifIEr check to detect these conditions.
Thursday, 7 June 2012
Checking Data Integrity
An orphan foreign key is a foreign key value (for a simple or compound key) where the parent row does not exist.
For example, where table PARENT has many CHILD, the key of PARENT exists as a foreign key in the CHILD table. If CHILD.FK_PARENT_CODE contains a value for which no PARENT row exists with that code, then the CHILD foreign key is an orphan.
This situation can arise from errors in the RI trigger runtime routines, use of SQL to delete the parent rows without deleting / nullifying the child rows, incorrectly generated RI triggers or incorrectly implemented DBMS RI rules. In the case of GuardIEn, the most common causes are use of SQL to (incorrectly) load rows or delete rows, and on some platforms, issues with the RI trigger runtimes.
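As a sketch of what such checking SQL looks like (hypothetical PARENT/CHILD tables following the example above, run here against SQLite from Python; the SQL that genIE generates will depend on the actual model):

```python
import sqlite3

# Hypothetical PARENT/CHILD schema following the example above; the SQL
# that genIE generates will depend on the actual model.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE parent (code TEXT PRIMARY KEY);
    CREATE TABLE child  (id INTEGER PRIMARY KEY, fk_parent_code TEXT);
    INSERT INTO parent VALUES ('A');
    INSERT INTO child  VALUES (1, 'A');  -- valid foreign key
    INSERT INTO child  VALUES (2, 'B');  -- orphan: no PARENT row with code 'B'
""")

# Step 1: identify orphan foreign keys
orphans = conn.execute("""
    SELECT c.id, c.fk_parent_code
    FROM child c
    WHERE c.fk_parent_code IS NOT NULL
      AND NOT EXISTS (SELECT 1 FROM parent p
                      WHERE p.code = c.fk_parent_code)
""").fetchall()
print(orphans)  # [(2, 'B')]

# Step 2: clean up, here by nullifying the orphan foreign key
# (deleting the orphan rows is the other option)
conn.execute("""
    UPDATE child SET fk_parent_code = NULL
    WHERE fk_parent_code IS NOT NULL
      AND NOT EXISTS (SELECT 1 FROM parent p
                      WHERE p.code = fk_parent_code)
""")
```

Whether you nullify or delete depends on the optionality of the relationship in the model.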
To help identify and fix these integrity issues, we have added a new genIE function that generates the SQL to firstly identify orphan rows and then to perform a cleanup.
We will be distributing the SQL to GuardIEn users so that they can check the integrity of the database, and the new function will be available in 8.1.4 so that customers can generate the SQL to check their own application databases.
Friday, 2 March 2012
Use of IN clause
Whilst there may no longer be any performance benefit in converting to this syntax, an IN clause can make a complex READ statement more readable (no pun intended).
Consider the example below:
This can be re-written using IN clauses:
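In SQL terms the transformation amounts to the following (a hypothetical sketch with an illustrative request table and status values, demonstrating that the two WHERE clause styles return the same rows):

```python
import sqlite3

# Hypothetical table and status values; the point is only the equivalence
# of the two WHERE clause styles.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE request (id INTEGER PRIMARY KEY, status TEXT);
    INSERT INTO request VALUES (1, 'NEW');
    INSERT INTO request VALUES (2, 'OPEN');
    INSERT INTO request VALUES (3, 'CLOSED');
    INSERT INTO request VALUES (4, 'HELD');
""")

# The OR-chain form, as a READ with multiple OR predicates would produce
with_or = conn.execute("""
    SELECT id FROM request
    WHERE status = 'NEW' OR status = 'OPEN' OR status = 'HELD'
    ORDER BY id
""").fetchall()

# The equivalent IN form: the same rows, but easier to read and extend
with_in = conn.execute("""
    SELECT id FROM request
    WHERE status IN ('NEW', 'OPEN', 'HELD')
    ORDER BY id
""").fetchall()

print(with_or == with_in)  # True
```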
Even better, the improved READ EACH statement above was automatically converted by using a VerifIEr check that detects READ statements that could use an IN clause and then invokes the integrated genIE fix to convert the READ statement.
Friday, 17 February 2012
z/OS Operations Libraries and Dynamic Linking
One difference between Windows/UNIX and z/OS relates to the use of the MVS Dynamic Linking property of an action block (or business system default). To be eligible for packaging into a z/LIB, the action block must be marked as dynamic (not static or compatibility).
You should be careful when converting from purely dynamic action blocks to z/LIBs. If you only package the top level action blocks into the z/LIB, the lower level action blocks will be called dynamically from a separate load module if they remain dynamic and are not packaged into the z/LIB.
Consider the following example: AB1 calls AB2 and both are marked as dynamic, either as a business system default or explicitly.
If you create a z/LIB and only add AB1 to it, then since AB2 is private to AB1, the z/LIB will only contain AB1: AB2 remains an 'old style' dynamic action block with its own load module, and the call from AB1 to AB2 is dynamic.
If AB2 is private, it should either be changed to static, so that it is statically linked into the z/LIB, or added to the scope of the z/LIB so that it is also linked into the z/LIB.
Thursday, 5 January 2012
Unadopting action blocks
The solution was to give the second copy of the action block different ancestry, but how, given that there is no unadopt function on the CSE?
The workaround was as follows:
- scope a subset containing the ABs that you want to unadopt
- download the subset
- generate a new (temp) model on the CSE from the subset so that the ABs in the new model have new ancestry
- selectively adopt just the business systems in the new model to the old model
- selectively adopt the ABs in the old model to the ABs in the new model so that the ancestry in the old model changes
- delete the new (temp) model
Thursday, 8 December 2011
View Starving or Perfect View Matching?
The technique of view starving involves removing all redundant attributes from views to avoid unnecessary storage allocation and view initialisation logic.
The technique of perfect view matching involves ensuring that the view structures in a USE statement are identical so that the program call can be achieved via passing of the actual views.
The question related to the problem that if you starve the views, you might not get perfect view matching, and hence should you then allocate a new local view that has perfect view matching and add in extra statements to manually maintain this temporary view to achieve perfect view matching for the USE statement.
If you do not have perfect view matching, then Gen needs to generate an intermediate data structure in the calling program to accomplish the parameter passing on the call of the used action block.
Data then needs to be passed from the actual views in the calling AB to the intermediate data structure, and therefore there is an overhead for a) allocating the storage for the intermediate structures and b) the instructions to move the data.
Therefore in this situation, there is little difference between doing this yourself via a local view or getting Gen to automatically generate the extra structures – it amounts to the same thing, at least for one USE statement. Note that the order of the views is also important. Your local view would have to have the same attributes in the same order, and to work this out requires a careful comparison of the two views since the view matching dialog will not indicate if they are ordered differently.
It only becomes more efficient to define your own views if you have many USE statements that would make use of your fully populated view, since Gen would generate the data moves for each USE whereas you may only need to move the data once for multiple calls. However, this becomes more complicated to understand and maintain. It only needs one attribute to be added, deleted or even re-ordered for the technique to fail to achieve perfect view matching, and then you incur a double overhead: your views and the extra generated code. The additional code also complicates the action diagram, often for little benefit.
In an on-line transaction with a single USE statement, the overhead is not worth worrying about. You should concentrate on achieving perfect view matching where it will affect performance. This would typically be for large group views and repeated calls to the same action block, for example, in batch jobs where the same action block is called within a loop that is executed many times.
With our automated code checking tool VerifIEr, we have a perfect view matching check. This will work out whether you have perfect view matching or not, and can be configured to only check group views and USEs within loops, so that you only have to focus on the important ones. VerifIEr can also check that attributes are used, and hence help with view starving.
Where you want to have perfect view matching for performance reasons, I would add in the extra attributes to the ‘proper’ views rather than add in extra views, since there would be no benefit from adding in the extra views. The only time this might be needed is if there are many called ABs with different import views and all sourced from the same view in the caller. However in this situation I would then let Gen handle this situation rather than introduce my own additional views.
Friday, 8 July 2011
RI Triggers – Gen or DBMS?
We recently had a discussion with a customer regarding the difference between Gen and DBMS RI enforcement.
The advantage of using DBMS RI is that referential integrity is maintained by the database rather than by Gen generated code, so any program or interactive SQL that deletes records will still maintain RI. With Gen RI, you must either always perform deletes using Gen programs or ensure that your non-Gen programs or SQL correctly maintain RI by cascade deleting child rows, setting foreign keys to NULL, etc.
However one important consideration is that many DBMS products do not support the full range of delete rules that can be defined in Gen. One example is a pendant delete, where the parent row is deleted when the last child is deleted. In this situation, Gen will enforce the rules that cannot be enforced by the DBMS, so that the generated RI triggers contain a mixture of Gen and DBMS enforced rules.
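For the rules a DBMS does support, declarative RI looks like this (a small SQLite sketch from Python; the table names are illustrative). Note that standard declarative foreign key actions have no equivalent for a pendant delete, which is why Gen must generate code for it:

```python
import sqlite3

# Illustrative tables only. SQLite enforces declarative RI once foreign
# keys are switched on for the connection.
conn = sqlite3.connect(":memory:")
conn.execute("PRAGMA foreign_keys = ON")
conn.executescript("""
    CREATE TABLE parent (code TEXT PRIMARY KEY);
    CREATE TABLE child (
        id INTEGER PRIMARY KEY,
        fk_parent_code TEXT REFERENCES parent (code) ON DELETE CASCADE
    );
    INSERT INTO parent VALUES ('A');
    INSERT INTO child VALUES (1, 'A');
    INSERT INTO child VALUES (2, 'A');
""")

# The cascade rule is enforced by the database, so even a raw SQL delete
# of the parent removes the child rows as well
conn.execute("DELETE FROM parent WHERE code = 'A'")
print(conn.execute("SELECT COUNT(*) FROM child").fetchone()[0])  # 0
```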
The danger with this situation is that you might think that all RI is enforced by the DBMS and hence not worry about deletes performed outside of Gen. However, the DBMS would only be enforcing some of the rules, and hence the results would differ between using Gen to perform a delete and using non-Gen programs.
Another consideration with DBMS RI is that you must ensure that the DBMS rules are kept up to date, on all databases, for example, development, test and production.
For these reasons, we use Gen enforced RI for our products.
Wednesday, 23 February 2011
GuardIEn and Versions
I was recently asked to clarify how GuardIEn handles code versions, and thought that posting the reply might be helpful to others.
GuardIEn stores meta-data about versions (i.e. information about the versions) rather than the versions themselves, and the actual versions of objects have to be stored as objects in Gen models. Hence to be able to ‘use’ any prior version in a Gen process like a migrate, it must exist in a model.
If you consider an example of a 3-level development hierarchy with DEV, TEST and PROD environments, then as long as you have a Gen model associated with each environment, you could have up to 3 separate versions of an object, one per model.
- If you start with the same copy of the object in each and assign it Version 1, then in GuardIEn there will be one version (V1) and that version is in the DEV, TEST & PROD models.
- When you change the object in DEV and upload it using the Upload Assistant, GuardIEn will create a new version (V2) and assign it to the first state in the life-cycle for the DEV environment. You then have two versions in GuardIEn and two versions in the Gen encyclopaedia – V1 in TEST & PROD and V2 in DEV.
- You migrate the object from DEV to TEST and the status of V2 will be updated to a TEST state and you still have two versions, V1 in PROD and V2 in DEV & TEST.
If you want to back out the change in TEST, you could migrate the object from PROD to TEST (assuming that there were no other changes in the model that would prevent this migration) and reset the status of the version to a DEV state.
Once V2 is migrated to the PROD model, you no longer have a copy of V1 in any model and could not go back to V1 through migration.
It is often useful to be able to see what the contents of previous versions were even if you do not have them in a model, and to provide this we have Minor Versions. Whenever an object is changed via the Upload Assistant or genIE, a copy of the object is taken as a text file (PAD listing for AB/PROC, objects & properties report for ENT/WAS, source code listing for XOs) and stored in the GuardIEn database. This provides a full audit trail of each change made to an object. Whereas in a Gen model you only see the latest copy of an object, and hence the detailed who/what/when of multiple changes to the same object is lost, with Minor Versions you can see each change separately.
This is then a very useful resource, not only as an audit trail of changes, but also for problem solving. The ability to see the what, why, when and who (what has changed, why was it changed, when was the change made and who made it) makes diagnosing a problem much easier. With Gen, a single model can only contain a single version of an object, so if the object is changed, you lose the ability to see what it looked like the moment before the change, unless you have saved the previous version somehow (via migration, model copy, etc.).
It is possible to configure GuardIEn to manage a backup model which is maintained as part of a system update to production. The section on Backup Migration in the System Updating Steps manual describes the process that the backup migrate step uses to scope the objects that are migrated to the backup model. However because a Gen model enforces strict consistency between objects, it may not be possible for the n-1 version of two objects to co-exist in the same model, for example, if the n-1 action block uses a new attribute from the version n entity type. Hence often you may find that the current production version of an object has had to be migrated to the backup model to support the migration of another object and the backup model does not therefore contain the previous version of the object.
Monday, 17 January 2011
Gen 8.0 FP1 and Batch z/OS Libraries
Friday, 3 December 2010
Continuous Integration and Gen
The requirement for code 'integration' most often results from the ability, in most development approaches, for multiple developers to check out the same source code and then 'integrate' their changes back into the master copy stored in a repository. It also typically results in an automated build process (and possibly automated tests) to ensure that the integrated changes are compatible with changes applied by other developers to the same or related source.
The objectives of CI are to improve the quality of software and reduce the time taken to deliver, especially by reducing or eliminating the costly and time-consuming integration tasks, by ensuring that any integration issues are resolved early on and not as a massive exercise late in the development process.
Some of the technical issues that CI attempts to address are not applicable in a Gen environment. Only one person can check out an object with modify access from a model, and on upload Gen ensures that, at a basic level, the changes are consistent with the model, so it is tempting to conclude that CI is not applicable in a Gen project.
However there are still several areas where CI concepts can be usefully applied.
The first relates to generated code and test environments. In our development environment, we generate code from the CSE into a shared development directory. In this way, every developer tests from the same code base and does not need to worry about maintaining their own private source code, object code or database. There are several other benefits of using server based code generation over local toolset generation, and perhaps this could be the subject of another post sometime…
It is important that the changes to the model, once uploaded, are correctly generated. We therefore use GuardIEn’s Upload Assistant to automatically perform the impact analysis and then generate the affected modules after each upload. In this way, the generate/build process ensures that the development code repository is kept up to date, and any errors are trapped at an early stage.
Another aspect of CI is ensuring that quality control is applied continuously. Numerous studies have shown that errors are far cheaper to fix if they are detected and fixed at an early stage in the life-cycle. We automatically run about 25 checks on the changed objects on upload, using the integration between VerifIEr and the Upload Assistant. These checks detect common errors in the code (for example missing view matching or redundant code), and whilst the errors should be detected during testing, it is far easier and cheaper to correct them whilst the subset is still downloaded and before time has been wasted generating and testing the code.
Monday, 8 November 2010
Alternative View Mapping Technique
AB1
->AB2
-->AB3
--->AB4
---->AB9
->AB5
-->AB6
--->AB7
---->AB8
------>AB9
With some very complex structures involving hundreds of possible paths through the logic, this can involve a lot of extra views being created in the intermediate action blocks in the calling chain, and the potential for some views not being mapped, so that data is lost along the calling chain.
A technique that we have used to provide an alternative method of passing data around is to have a common action block that stores the data in an uninitialised local view.
The logic of the action block is as follows:
SAVE_DATA
IMPORTS
   in action code
   link my_data string (exported)
LOCALS
   temp my_data string (not initialised)

IF in action code = 'P'
   MOVE link my_data TO temp my_data
ELSE
   MOVE temp my_data TO link my_data
The revised logic for the application is now:
AB1:
SET temp action code to 'P'
SET temp my_data string to 'whatever data you want to pass'
USE SAVE_DATA
WHICH IMPORTS: temp action, temp my_data
Any action block that wants the value of my_data can then use SAVE_DATA to get the value without it needing to be passed on every intermediate USE statement.
Note that this technique will only work within a single load module and cannot be used to share data across load modules unless SAVE_DATA is created as an external action block with shared memory.
In the vast majority of cases, you should still use view mapping, but there might be some cases where the above technique will allow you to easily share a small amount of temporary data between a large number of action blocks without needing to include it as data passed on all USE statements.
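For readers more familiar with other languages, this is the same trick as function-scoped static storage. A rough Python analogue of the SAVE_DATA pattern (names and the 'P' action code mirror the sketch above but are otherwise illustrative):

```python
# The persistent 'uninitialised local view' is modelled here by state that
# survives between calls; the names and the 'P' action code mirror the
# SAVE_DATA sketch above but are otherwise illustrative.
_saved = {"my_data": None}

def save_data(in_action_code, link_my_data=None):
    """'P' stores the passed value; any other code returns the stored value."""
    if in_action_code == "P":
        _saved["my_data"] = link_my_data
        return None
    return _saved["my_data"]

# AB1 stores the value once...
save_data("P", "whatever data you want to pass")

# ...and a deeply nested action block later retrieves it without the value
# having been passed through every intermediate USE statement
value = save_data("G")
print(value)  # whatever data you want to pass
```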
Friday, 30 July 2010
Parallel Processing
One of the recent enhancements we have been working on for the next release of GuardIEn is to reduce the overall elapsed time for large generation jobs on the client/server encyclopaedia by running multiple generates in parallel, thus taking advantage of multi-core servers. (We had already enabled parallel generates on the host encyclopaedia some years ago, which was implemented by submitting multiple jobs).
Because our tools are developed with CA Gen, we needed to work out the best way of implementing a parallel processing architecture.
There are several design alternatives to enable multi-thread processing for a CA Gen application. For this requirement, we decided to create child processes launched from the Gen action block and have the Gen AB wait for the child processes to complete. This enabled the design to launch a variable number of parallel generates (controlled by a parameter) and issue another generate when the previous one completed. The creation of the child processes is performed by a C external.
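The scheduling scheme can be sketched as follows (a simplified Python illustration of the design, not the actual C external; the commands and the parallelism parameter are placeholders):

```python
# Sketch of the scheme described above: run up to max_parallel child
# processes at once and start another as soon as a slot becomes free.
# The real implementation is a C external called from a Gen action block.
import subprocess
import sys
import time

def run_parallel(commands, max_parallel=2):
    pending = list(commands)
    running = []
    exit_codes = []
    while pending or running:
        # launch children up to the configured degree of parallelism
        while pending and len(running) < max_parallel:
            running.append(subprocess.Popen(pending.pop(0)))
        # reap any child that has finished, freeing a slot for the next
        still_running = []
        for proc in running:
            rc = proc.poll()
            if rc is None:
                still_running.append(proc)
            else:
                exit_codes.append(rc)
        running = still_running
        if running:
            time.sleep(0.05)
    return exit_codes

# e.g. three dummy "generates", each exiting with status 0
codes = run_parallel([[sys.executable, "-c", "pass"]] * 3, max_parallel=2)
print(codes)  # [0, 0, 0]
```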
On our current test server, which only has 2 processors, we have noticed a 30% reduction in elapsed time, and we are due to implement a new server with two 4-core processors, which we hope will show even better reductions in elapsed times for large generation tasks.