Thursday, 22 October 2009

Unused and Duplicate Prompts

We have been developing our products for close to 20 years now, and one consequence is that quite a few unused and duplicate prompts have accumulated in the models. We also have a multiple model architecture and a policy of migrating the entire data model to each of the development models, which results in all of the prompts being duplicated (and unused) in all of the models.

Apart from leaving a large number of redundant prompts in the models, this also makes the selection of prompts in the window/screen designer tedious, because the long lists of unused and duplicate prompts make it harder to locate the one you want.

There is a Gen function in the toolset to delete unused prompts, but this requires the model to be downloaded, and ours are too big for that. It also does not remove duplicates.

We have therefore written a new genIE function to both delete unused prompts and consolidate duplicates.

The results are faster downloads, because you are no longer downloading the extra prompts, and easier selection of existing prompts in the window/screen designer.
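
By way of illustration, finding duplicates is essentially a grouping problem. The sketch below does not use the real Gen encyclopaedia schema; it assumes a simplified, hypothetical prompt table purely to show the idea:

-- Hypothetical, simplified schema: one row per prompt in a model
-- (the real Gen encyclopaedia stores prompts quite differently)
SELECT model_id, prompt_text, COUNT(*) AS copies
FROM prompt
GROUP BY model_id, prompt_text
HAVING COUNT(*) > 1;

The consolidation step then re-points any references at a single surviving prompt and deletes the remaining copies.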

Thursday, 1 October 2009

RI Trigger Impact Analysis

The previous post discussed the need to regenerate programs that call changed RI triggers. The difficulty is in performing this impact analysis.

The Gen model does contain associations between action blocks and RI triggers (IMPUSE associations, if you are familiar with the Gen schema), but the important point to note is that these associations are maintained by the code generators. This means that if you change the data model, the IMPUSE data is not accurate until you have regenerated the affected code, which makes it useless for working out what you need to regenerate!

We faced this issue when developing the impact analysis process that GuardIEn performs when it detects data model changes. Working out which RI triggers are directly affected by a data model change is straightforward, but the consequential impact on other triggers and action blocks / procedure steps involves a complex navigation through the data model, following cascade delete chains for example. It is also affected by the choice of generated or DBMS-enforced RI rules.
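
To give a feel for the navigation involved, here is a sketch of following cascade delete chains with a recursive query. The relationship table and its columns are hypothetical (the real Gen schema is quite different), and the SQL is PostgreSQL-style:

-- Find every entity reached from a changed entity via cascade delete rules
WITH RECURSIVE affected (entity_name) AS (
    -- entities deleted directly when the changed entity is deleted
    SELECT child_entity
    FROM relationship
    WHERE parent_entity = 'CHANGED_ENTITY'
      AND delete_rule = 'CASCADE'
    UNION
    -- plus the entities they cascade to in turn
    SELECT r.child_entity
    FROM relationship r
    JOIN affected a ON r.parent_entity = a.entity_name
    WHERE r.delete_rule = 'CASCADE'
)
SELECT entity_name FROM affected;

The real analysis is more involved, since it must also allow for pendent and restricted rules and for whether RI is generated or DBMS-enforced.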

Tuesday, 29 September 2009

RI Triggers

In my experience, one of the more misunderstood aspects of Gen development is RI triggers, and the impact of making a change in the data model.

In many cases, users think that they only need to regenerate the RI triggers once they have made their data model changes. Unfortunately it is more complex than that.

The correct process is:

1) Implement any changes to table or column names in the data structure
2) Run the Ref. Integrity Process (accessed from the toolset Design menu) to synchronise the technical design RI constraints with the data model
3) Generate the affected RI triggers
4) Re-generate all action blocks that call the re-generated RI triggers
5) Install re-generated code

Steps (1) and (2) can either be performed separately or accomplished using the Retransformation tool.

The reason why you need to regenerate the action blocks that call the RI triggers is that the code generated for a DELETE, DISASSOCIATE, TRANSFER, etc. depends on the RI rules in the model. If these rules change (for example changing a relationship from cascade delete to pendent delete), then the action blocks require regeneration as well as the RI triggers.
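
As a rough illustration of why, consider the shape of the delete logic under the two rules (simplified, hypothetical SQL, not actual generator output):

-- Cascade delete: child rows are removed along with the parent
DELETE FROM child_table WHERE parent_id = :id;
DELETE FROM parent_table WHERE id = :id;

-- Pendent delete: the parent may only be deleted once no children remain,
-- so the logic first checks for children and fails if any exist
SELECT COUNT(*) FROM child_table WHERE parent_id = :id;  -- must be zero
DELETE FROM parent_table WHERE id = :id;

Code generated for the first shape clearly cannot enforce the second, hence the need to regenerate the calling action blocks and not just the triggers.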

Tuesday, 22 September 2009

Where is your source code?

For your Gen applications, the answer is of course that it is stored in the Gen encyclopaedia. But what about your non-Gen code like:

  • External Action Blocks
  • OLE files
  • Bitmaps, icons
  • DDL
  • Documentation
  • etc.

Unfortunately we often come across projects that do not adequately manage their non-Gen source code, probably because they do not have this issue with the Gen code. We have seen projects that have lost their EAB source code and have no documentation of what the EABs did beyond the stubs in the Gen model.

Another common instance is the use of an OCX control in a GUI design. Gen creates a .ole file in the local workstation model sub-directory containing the properties of the OCX control. However, this file is not uploaded to the model, so if you delete the local model directory and have not saved the .ole files elsewhere, they are lost!

You should therefore take care that all project source code is properly managed. This could be as basic as ensuring that the files are stored centrally in a place where they will not be deleted, either accidentally or as part of a housekeeping routine.

Even better, the external code should be properly version controlled. There are many tools for this, ranging from simple and free source code control tools to more sophisticated products. Our own XOS tool has been designed specifically for managing Gen externals, including support for automatically versioning .ole files when a subset is uploaded to the encyclopaedia.

Wednesday, 26 August 2009

An unexpected feature of Object Migration

An unexpected feature of the CA Gen object migration utility that sometimes catches us out is that when you migrate an action block, the view matching of the action blocks that use it can also be affected.

Consider the example where AB1 uses AB2. If you add a new import view to AB2 and view match it to an existing view in AB1:

AB1:
USE AB2
IMPORTS: temp xxx to in xxx


If you then migrate just AB2 to another model, and the view temp xxx in AB1 has common ancestry between the two models, the view match is also migrated. This in effect modifies AB1, even though it was not selected for migration. However, AB1 does not get a new modified timestamp, so it looks as though AB1 has not changed even though its view matching has.

This may not necessarily create a problem, but it does sometimes cause confusion.

Friday, 14 August 2009

Gen and null columns

A recent posting on the Duick forum regarding NULL column support led to a discussion of the Gen qualifier IS EQUIVALENT TO and a potential misinterpretation of the way that it works.

As a bit of background information, it is important to understand how a nullable column containing a NULL value behaves. Consider a table with a nullable column and these rows:

Id  opt_column
1   ' '   (column has a value of spaces)
2   NULL  (column is NULL)
3   'X'   (column has a value of 'X')
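
If you want to experiment, a minimal setup might look like this (test_table is a made-up name; the queries below simply use table as a placeholder):

-- Hypothetical table to reproduce the three rows above
CREATE TABLE test_table (
    id         INTEGER NOT NULL,
    opt_column CHAR(1)          -- nullable
);
INSERT INTO test_table VALUES (1, ' ');   -- spaces
INSERT INTO test_table VALUES (2, NULL);  -- NULL
INSERT INTO test_table VALUES (3, 'X');   -- a real value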


If you want a value of NULL to be treated as equivalent to SPACES, and you want to read rows that have either a space or a NULL in opt_column, then this SQL:

SELECT * FROM table WHERE opt_column = ' ';

would return just row 1, whereas

SELECT * FROM table WHERE opt_column = ' ' OR opt_column IS NULL;

would return rows 1 and 2.

If you want to read rows that do not have 'X':

SELECT * FROM table WHERE opt_column != 'X';

would return just row 1, whereas

SELECT * FROM table WHERE opt_column != 'X' OR opt_column IS NULL;

would return rows 1 and 2.

Once you understand the need for the IS NULL or IS NOT NULL qualifier in the SQL, you can write the READ qualifiers in the action diagram code.

The confusion arises over the use of the IS EQUIVALENT TO clause since it is likely that this does not work the way you expect!

For example, if an optional column has no value, I think of SPACES and NULL as the same, so you would code:

READ table WHERE opt_column = SPACES OR opt_column IS NULL
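
which you would expect to produce SQL along these lines (a sketch, not actual generator output):

SELECT opt_column FROM table
WHERE (opt_column = ' ' OR opt_column IS NULL)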

However, the statement:

READ table WHERE opt_column IS EQUIVALENT TO SPACES

gives the following SQL, which is not the same:

SELECT opt_column FROM table
WHERE (opt_column = ' ' AND opt_column IS NOT NULL)


This means that if the column is NULL, the row will not be returned, which is the opposite of what you probably want.

IS EQUIVALENT TO is fine when you are testing for a specific (non-blank) value. For example:

READ table WHERE opt_column = 'X' AND opt_column IS NOT NULL

is the same as

READ table WHERE opt_column IS EQUIVALENT TO 'X'

which gives the following SQL:

SELECT opt_column FROM table
WHERE (opt_column = 'X' AND opt_column IS NOT NULL)


IS NOT EQUIVALENT TO also gives the desired result for a specific value, but not for SPACES:

READ table WHERE opt_column NOT = 'X' OR opt_column IS NULL

and

READ table WHERE opt_column IS NOT EQUIVALENT TO 'X'

both give:

SELECT opt_column FROM table
WHERE (opt_column <> 'X' OR opt_column IS NULL)


But if you want a row where the column is not spaces, you would code:

READ table WHERE opt_column NOT = SPACES AND opt_column IS NOT NULL

whereas

READ table WHERE opt_column IS NOT EQUIVALENT TO SPACES

gives:

SELECT opt_column FROM table
WHERE (opt_column <> ' ' OR opt_column IS NULL)


In summary, it is best not to use IS EQUIVALENT TO with SPACES unless you want exactly the behaviour that the generated code gives you. You also need to be careful when qualifying with a view that might contain a value of spaces.

Thursday, 13 August 2009

Documenting Changes

Whether you are developing software that is commercially available or for internal use, users value a clear description of the enhancements and fixes introduced in a new release or service pack. This is especially important when there are changes in behaviour or actions that need to be taken by the users to take advantage of new features.

A customer recently complimented us on the quality of our release notes and asked whether we generated them from a database.

Unfortunately there wasn’t a magic solution: we cut and paste the description of each change into a Word document and then generate the PDF file from that. That part is simple, though. The harder part is ensuring that each change is documented properly and is not accidentally omitted from the release notes. We therefore document changes using the following process:

1) Each change must be documented in a form that will make sense to the end user, explaining the business reason for the enhancement or requirement for a fix. The documentation is in the long description of the Change Request (CR) in GuardIEn.

2) Each CR should address a single problem or enhancement. You should avoid CRs that span multiple requirements (the worst cases being a single CR that contains all of the changes, or a CR for changes made by a developer that is not linked to the actual requirements).

3) If a new requirement is found whilst changing some code, a new CR should be created for it, and the temptation to ‘hide’ the new requirement within the scope of the existing CR should be avoided.

4) Once the CR has been completed and tested, the description should be reviewed for accuracy and any changes in behaviour noted.

5) The Release Notes should be updated with the CR and user documentation reviewed and updated as necessary. We have a separate state in the CR life-cycle to indicate that the documentation has been updated.

Tuesday, 14 July 2009

Almost like having a new machine

We recently upgraded our anti-virus software to the latest 2010 release, and it was immediately noticeable how much slower our machines were, especially the heavily used CSE machine and the desktops. The new release was therefore uninstalled and the older (2009) release re-installed. That still took up to 25% of the CPU, so we decided to try some alternatives. After a bit of research, we selected one of the other leading products to trial. Both the desktops and the CSE now run much faster. It is almost like having a new machine, and so the upgrade can be delayed for a while!

Monday, 29 June 2009

In praise of integration

Having spent over 20 years developing our products using Gen, it is clear to me that one of the main benefits is the low cost of maintaining applications developed with Gen. I think that there are many reasons for this, some of which are due to inherent features of Gen, while others derive from the methods and standards used by the development project. In my view, a key feature of Gen that contributes to the low cost of maintenance is the integrated nature of the analysis and design tools.

The early marketing of IEF (as Gen was called in the early days) emphasised the integrated nature of the product and IEF was called an i-CASE (integrated Computer Aided Software Engineering) tool to distinguish it from point solution CASE tools. Unfortunately many i-CASE tools were nothing of the sort and few if any came close to delivering the 100% code generation and great success of Gen. This resulted in the CASE / i-CASE market getting a bad name, through little fault of IEF.

However, having chosen the best integrated development tool, shouldn’t a Gen project maximise the benefits of that integration? The trend to use Gen only for the server and batch parts of a project concerns me. Whilst there are undoubtedly situations where Gen is not the best choice for developing the user interface, I suspect that there are others where the choice not to use Gen for the front end has been a mistake, given the resulting increased cost of development and maintenance.

When the user interface is developed with a separate tool, the interface between the presentation layer (client) and the business logic (server) has to become much more formalised at an early stage in the life-cycle, especially when the client and server parts are developed by separate teams. Even if you are using CBD/SOA or some other development approach that advocates stable, published interfaces, there are still many situations when a rapid, iterative approach to development will benefit from having one person develop the client and its closely coupled servers at the same time and with the same tool.

The goal of 100% code generation and integrated nature of Gen means that there are boundaries to the product's capabilities. Whilst there are features that allow external code (external action blocks, OCX controls, etc.), there are still limitations on what can be accomplished with Gen. The perceived weakness of Gen for developing sophisticated user interfaces has made some Gen projects avoid Gen for the user interface or presentation layer of an application.

A few years ago, I was visiting a long-standing Gen user who had used Gen very successfully to develop 3270 and batch applications. I demonstrated GuardIEn to the development manager, and then we went for lunch. He explained that they were now moving to client/server but had decided not to use Gen for the front end because they did not think that you could develop a good front end with it. I asked him what they were looking for, and his response was that they would like to be able to develop something that looked like GuardIEn! He did not realise that GuardIEn was a Gen-developed application, with its user interface created using the same Gen design tools that they had decided were inappropriate.

Now, to achieve the sophisticated look and feel of our products with Gen has not been easy. We have had to develop an add-on tool (IETeGUI) and learn how best to achieve the desired results. But is this not the case with any tool? Don’t just take the product out of the box and expect to develop a very sophisticated user interface immediately. It needs quite a bit more work than that – probably more than you would expect. It is not easy to create a great user interface with Gen, but it can and has been done, and in my view, the extra effort is more than compensated for by the significant reduction in development and maintenance effort through the use of an integrated tool with 100% code generation.

Monday, 15 June 2009

Dog Food or Champagne?

There is a saying about eating your own dog food, or the more pleasant version, drinking your own champagne. The point is that if you really believe in your own product, then you would use it yourself, and therefore I prefer the dog food analogy since you would only eat your own dog food if it was really palatable, whereas you might be prepared to drink anything that is alcoholic!

Anyway, getting back to the main point, if you are a software developer and you can use your own products, then you have a big incentive to improve them for your own benefit. This is why I was really pleased when I heard that CA would be using Gen within their development team as part of the Mainframe 2.0 initiative.

Because we develop our products with Gen, we are also able to use our own tools as well, and this positive feedback loop has resulted in many improvements and enhancements to make the ‘dog food’ as palatable as possible. An example of this is in the area of version control.

One of the most useful tools in the armoury of a developer is the ability to see what has changed in the source code. The ability to see the what, why, when and who (what has changed, why was it changed, when was the change made and who made it) makes diagnosing a problem much easier. With Gen, a single model can only contain a single version of an object, so if the object is changed, you lose the ability to see what it looked like the moment before the change, unless you have saved the previous version somehow (via migration, model copy, etc.).

Since it is impractical to save the previous version every time a change is made, often the diagnosis of a problem is made unnecessarily hard because this useful information is not available. For example, a user reports a problem in the test system that they noticed a few days ago. In the meantime, the model has been changed and you are therefore unable to see what the changes were (only that the object was last changed on a specific date/time). If you cannot reproduce the problem, you cannot then tell if the problem has been fixed, or if your test case does not properly test for the issue.

We have found the ‘minor versions’ feature of GuardIEn especially useful. This allows you to track every change made to a Gen object and see who changed it, when it was changed and what was changed (down to properties and individual action diagram statements). When linked to a GuardIEn Change Request, you can also see why it was changed and what other objects were affected by the same change.
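
To illustrate the kind of record this implies, here is a purely hypothetical audit-table query (this is not GuardIEn’s actual schema, just a sketch of the who / when / what idea):

-- Hypothetical audit table: one row per tracked change to a Gen object
SELECT changed_by, changed_at, property_changed, old_value, new_value
FROM change_history
WHERE object_name = 'SOME_ACTION_BLOCK'
ORDER BY changed_at DESC;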

I know that we would say this anyway, but we have found this capability to be invaluable in the on-going maintenance of our products.