
Wednesday, 22 April 2015

More choices for using CA Gen to develop your UIs

I was invited to present Rapide at the recent COOLUp event in the Netherlands. Whilst the audience seemed very impressed by the product, one of the questions raised was why we had decided to make a considerable investment in developing a new front-end capability for CA Gen.

What lay behind this question was an assumption that most CA Gen sites are no longer considering Gen for developing new applications, and that some are looking to move away from Gen.

What followed was an interesting discussion about the use and position of CA Gen within many organisations. The reality is that most sites that still use Gen have a considerable investment in applications that are still meeting the business needs, are very stable and are usually maintained by a small team of people. 

The result of this stability and low staffing levels is that often management are unaware of the scale and complexity of the Gen systems and the good value that these systems provide. It is only when they accurately estimate the cost of replacing the systems that the true value is understood, at which point the sensible business decision is to stay with Gen. This is why many sites are still using it today.

The problem that can result from this is that Gen remains in a state of suspended animation - the applications remain and are maintained, but Gen is still not viewed as a strategic or even tactical development tool. The development organisation then does not make sufficient investment in the tool to maximise the benefits that they could derive from their considerable investment in the technology, skill sets and the models that have been developed over the years.

Coming back to the original question as to why we have developed Rapide, the answer is that we wanted to provide Gen users with greater choice and more reasons to stay in Gen.

Whilst Gen is incredibly strong in the development of back-end server and batch systems, with industry-leading capabilities for developing robust, scalable, platform-independent applications, it is generally acknowledged that its front-end capabilities are weaker, which has led some sites to use other options for developing the user interface.

At IET we feel strongly that the best way of developing applications is to use Gen for developing both the user interface and servers. It is much more productive and maintainable, but in the past has required you to accept the limitations of Gen's user interface capabilities.

We wanted to give Gen users the option to use Gen to develop robust, multi-platform UIs, including mobile and web, and this was the main reason for developing Rapide.

You can now use Rapide to easily migrate existing Gen block-mode, GUI and web applications to a more modern and responsive user interface for a fraction of the cost, effort and risk compared with re-writing the application in another technology.


Tuesday, 25 October 2011

Moving beyond the 32k CFB limit

Gen r8 IE2 increases the Common Format Buffer (CFB) limit from 32k to 16M.

The increase in the CFB limit is available for C generated applications on Windows and UNIX, but not in this release for z/OS, and hence we will not be able to take advantage of it in our products until z/OS support is available for CICS and IMS COBOL servers. This must be the top-ranked enhancement request for CA Gen for the past 20+ years, so it is great to see it finally make it into the product.

Previous posts have discussed strategies for coping with the 32k view size limit, and our experience with a Web View interface has been that it is a good idea for web applications to fetch and display small pages of data rather than trying to bring back a huge result set into a group view. The 32k limit can therefore be a good thing because it caps the amount of data returned in a single server call.

With the new limit, a developer could massively increase the data returned by a server, up to ~16M. This might be a good thing if the application previously assembled the same data with repeated server calls, but a bad thing if the user was previously expected to page through the data and use filter/selection fields to limit the number of rows displayed: they can now display thousands of rows without needing to use the filters.

Thinking about the impact on our own products (which are developed with Gen), there are several servers that would be much simpler with a larger export view size, so once z/OS support is available, we will take advantage of the larger view sizes to simplify the code and improve performance.

Gen 8 Interim Enhancement 2

We have just started beta testing Gen r8 Interim Enhancement 2 (IE2).

(Interim Enhancement is the new term within CA for Feature Pack).

The new features that we are particularly interested in are:

  • Increase in the common format buffer (CFB) limit from 32k to 16M
  • 64-bit Windows applications
  • Customised Java proxies

Wednesday, 3 November 2010

64 bit conversion

Gen r8 introduces the first platform to support 64-bit C code: HP Itanium. For the next release of our products, we will be using Gen r8 for Itanium and have therefore had to port to 64-bit.

The UNIX source code generated by Gen is not specific to a particular UNIX implementation, so the same code is compiled for 32 bit on AIX and PA-RISC and 64 bit for Itanium. The difference is in the compiler options used.

One difference in the Gen r8 generated C code is that the variable used for the repeating group view 'last' flag has changed from an int to a long. In 32 bit architectures, an int and a long are both 32 bits, whereas for 64 bit, an int is still 32 bits but a long is 64 bits for the LP64 architecture used in UNIX (but still 32 bit for the LLP64 architecture used by Windows IA-64).

This means that EAB code must be modified to change an int to a long for the repeating group view variables in import and export views. You will also need to look through the EAB code to see if you have used int and long incorrectly since they are no longer the same. The same is true for pointers, which become 64 bits in both LP64 and LLP64 architectures.
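The effect on EAB declarations can be sketched in C. This is a minimal illustration with hypothetical struct and field names (the real generated views are more complex): the same declarations compile to different widths under ILP32 and LP64, so an EAB that still declares the 'last' flag as an int no longer matches the r8 generated view.

```c
#include <stddef.h>

/* Hypothetical EAB import view layouts for a repeating group view.
 * Pre-r8, the generated 'last' flag was an int; in r8 generated C it
 * is a long.  Under LP64 (64-bit UNIX) long widens to 64 bits, so an
 * EAB still declaring int no longer matches the generated view. */
struct group_view_r7 { int  last; char name[10][33]; };
struct group_view_r8 { long last; char name[10][33]; };

size_t width_of_int(void)  { return sizeof(int); }
size_t width_of_long(void) { return sizeof(long); }
size_t width_of_ptr(void)  { return sizeof(void *); }
```

Compiled for 32-bit AIX or PA-RISC the two structs are identical; compiled for 64-bit Itanium they differ in both the flag's width and the struct's alignment, which is exactly why the EAB source must be changed rather than just recompiled.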

Monday, 11 October 2010

Changing attribute lengths

We recently decided to increase the length of a database column (attribute) to support longer path lengths. In principle this is a very easy task:

a) Change the attribute length
b) Amend the column length in the database design
c) Use database ALTER statements to change the physical database column length
d) Re-generate the affected code

In practice however, two aspects of the change were trickier:

1) Where the attribute is referenced in external action blocks, these will need to be identified and modified.
2) Code that is dependent on the attribute length might need to be modified.

The first issue was easy to solve. We created a custom function in Object List+ that lists all external action blocks that reference the attribute in an import or export view. The resulting list was then copied to a GuardIEn Change Request and then opened in XOS. All of the affected externals could then be downloaded and modified.

The second issue was harder. We had some code that assumed the old length of the attribute (in this case 50), for example, SET text = SUBSTR(attribute,49,2) was supposed to return the last two characters of the attribute. Now I agree that this is not great code, and the attribute length could be referenced using the length function rather than 50, but it was assumed that the length would not change and the hard-coded value used instead of the length to improve performance.
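The same hazard translates directly into C. In this hedged sketch (hypothetical function names), the fixed-offset version mirrors SET text = SUBSTR(attribute,49,2) and silently breaks when the attribute grows beyond 50 characters, whereas deriving the position from the actual length survives the change:

```c
#include <string.h>

/* Fragile: hard-codes the old 50-character length, mirroring
 * SET text = SUBSTR(attribute, 49, 2).
 * Position 49 (1-based) is offset 48 in C. */
void last_two_fixed(const char *attr, char out[3]) {
    memcpy(out, attr + 48, 2);
    out[2] = '\0';
}

/* Length-tolerant: derives the start position from the current length,
 * the C equivalent of using the length function instead of a literal. */
void last_two(const char *attr, char out[3]) {
    size_t len = strlen(attr);
    strcpy(out, (len >= 2) ? attr + len - 2 : attr);
}
```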

To identify these occurrences, a new VerifIEr check was developed that scans for code that uses the length of an attribute as a hard-coded value. This was able to identify code that needed to be changed and can also identify any future occurrences of this style of coding that would not be tolerant of a change in attribute length.

This illustrates one of the strengths of CA Gen. Because the action diagram 'source code' is stored in a SQL database using a precise structure (as opposed to the text files used by almost any other development tool), it supports complex queries that can scan the action diagrams looking for specific coding constructs.

Thursday, 23 September 2010

Of Mice and Men

Slightly off topic, but this might be of interest to older Gen developers. The Gen toolset is very 'mouse intensive' and much work needs to be done with the mouse rather than the keyboard. After 20 years of this, some of us older Gen developers are starting to feel the strain (literally), with RSI type irritations.

I found that changing to a different type of mouse was very helpful, and after trying a few out, now use a Vertical Mouse (see http://www.evoluent.com/). You may prefer a different style of mouse, and perhaps the main benefit is to change to something different?

Wednesday, 1 September 2010

Multi Row Fetch Experiences (3)

In previous postings I described how we converted all of our READ EACH statements to use multi-row fetch, and the results of a test that showed a significant performance improvement for a simple example with only a single READ EACH statement. That improvement was at the extreme end of what can be expected, because a normal application performs far more processing than just the SQL for the READ statement.

On a real world example for a complex impact analysis, we have found an 18% reduction in elapsed time, which is a significant and worthwhile improvement given the low cost of implementing the changes to the model, especially since we have automated the setting of the multi-row fetch property using VerifIEr.

Parallel Generation Results

We have now implemented the new CSE server. This has two 4-core processors and we recently conducted a simple test to benchmark the improvements gained when running CSE generation tasks in parallel (See previous post for introduction).

The result was a 60% reduction in elapsed time when running 4 threads in parallel, and a 70% reduction for 8 threads.

This was obtained with no other processes running, so for normal use, we plan to restrict a single generate task to a maximum of 4 parallel generation threads because of other tasks and on-line server processing requirements.

Tuesday, 29 June 2010

Multi Row Fetch Experiences (2)

A second issue with multi-row fetch (see previous posts) affects application design.

With a normal READ EACH, each row is fetched one at a time, so if the row fetched has been affected by previous processing within the READ EACH, the fetched row's column values will be up to date.

However, with a multi-row fetch, blocks of n rows are fetched into an array at the same time. If you update row n+1 whilst processing row n, then when you come to process row n+1, the values in the entity action views will not be the latest values: they are current as of the time they were fetched and hence do not include the update.

This should be a rare occurrence, but worth bearing in mind when deciding if multi-row fetch is applicable.
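The behaviour can be modelled in a few lines of C. This is an illustrative simulation, not Gen's actual runtime: the block fetch copies rows into a local buffer, so an update made to a later row while processing an earlier one is not reflected in the buffered copy.

```c
#include <string.h>

/* Simplified model of the hazard: a multi-row fetch copies a block of
 * rows into a local array, so updates made to later rows while the
 * block is being processed are not visible in the buffered copies. */
#define ROWS 4
int table[ROWS] = {10, 20, 30, 40};   /* stand-in for the database */

/* "Fetch" all rows in one block, as a rowset fetch would. */
void fetch_block(int buf[ROWS]) {
    memcpy(buf, table, sizeof(table));
}

/* Process the block: while on row 0, update row 1 in the "database".
 * Returns the value seen for row 1 in the buffer, which is the stale
 * pre-update value, not the updated one. */
int process_block(void) {
    int buf[ROWS];
    fetch_block(buf);
    table[1] = 99;      /* update made while processing row 0 */
    return buf[1];      /* still the fetched value, not 99 */
}
```

With a single-row fetch the second row would have been read after the update and the new value seen; with the block fetch the stale value is processed.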

Multi Row Fetch Experiences (1)

We have now started the development of the next release of our products using Gen r8.0. One of the new features of r8.0 that we are looking forward to using is multi-row fetch, because of the potential for serious performance improvements (see previous posting).

We have developed a new check in VerifIEr to calculate what the optimum fetch size should be for a READ EACH statement and then use this information to automatically update the READ EACH statement.

However our initial testing has highlighted some issues with multi-row fetch.

The first affects DB2 and relates to errors or warnings raised during the fetch. If any warnings or errors occur, DB2 returns sqlcode +354 and you have to issue further GET DIAGNOSTICS statements to retrieve the individual conditions. We have found several instances of warnings related to truncation of data. The warning is an sqlcode 0 with sqlstate 01004. This was caused by having an attribute defined in the model that was shorter than the database column, due to differences between the same column in the Host Ency and Client/Server Ency.

Because Gen does not check the sqlstate (only the sqlcode), without a multi-row fetch, you will never see the warning, but with a multi-row fetch, because the generated code does not handle the +354, the application terminates with a runtime error. Unfortunately you cannot tell what the cause was without amending the generated code to add in the GET DIAGNOSTICS statements.

So far we have been working through the warnings and eliminating them, but we are also considering a post processor for the generated code to add in the diagnostics to make debugging easier, or to ignore the +354 sqlcode if there are only warnings.
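The kind of logic such a post-processor could inject after the rowset FETCH might look like this in C. The structs here are simplified stand-ins for the condition information that GET DIAGNOSTICS reports, not DB2's actual interface: on a +354, walk the individual conditions and continue only if all of them are warnings.

```c
/* Simplified stand-in for post-processor logic after a rowset FETCH
 * that returned SQLCODE +354: inspect each condition (as reported by
 * GET DIAGNOSTICS) and continue only if all are warnings.  The struct
 * is illustrative, not DB2's actual interface. */
typedef struct {
    int  sqlcode;        /* condition SQLCODE: < 0 error, > 0 or 0 warning */
    char sqlstate[6];    /* e.g. "01004" = string truncation */
} sql_condition;

/* Returns 1 if the fetch can continue (warnings only), 0 on a real error. */
int handle_plus_354(const sql_condition *conds, int num) {
    for (int i = 0; i < num; i++) {
        if (conds[i].sqlcode < 0)
            return 0;    /* at least one condition is a genuine error */
    }
    return 1;            /* warnings only, e.g. 01004 truncation */
}
```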

The second issue is described in the next posting.

Tuesday, 4 May 2010

Beyond Compare

As a tools developer for CA Gen, the fact that we also develop our tools with CA Gen has meant that we tend to 'build not buy'. In other words, when we find the need for some additional tools support as part of our own Gen development projects, we enhance our own tools. This approach then leads to extra functionality in the products that is almost always useful to our customers as well.

However there are certain 3rd party tools and utilities that we have purchased runtime licences for instead of building ourselves. Examples include the diagramming OCX that we have used for creating the Object Structure, Life-Cycle and Model Architecture diagrams in GuardIEn, the ftp/sftp utilities and the file compare tool.

For the file compare tool, we have used the freely distributable WinDiff tool with the option for customers to replace this with their own favourite product. However Windiff is a fairly basic tool, and some time ago we replaced this for internal use with Beyond Compare 3 (BC3).

We like BC3 so much, that for the 8.0 release of our products, we have purchased additional licences to be able to distribute BC3 to our customers as well.

Wednesday, 28 April 2010

Gen r8 and z/OS

The beta test for Gen r8 has now ended and we are finishing off the changes made to our products to support r8. We will be launching release 8.0 of our products in early May to coincide with the general availability of Gen 8.0.

The most significant changes that affected us were on the z/OS platform. The introduction of z/OS Libraries (OPSLIBs for z/OS), dynamic RI triggers and changes to the way that applications are linked affected many aspects of GuardIEn, especially in the areas of impact analysis and code construction.

In previous releases of Gen, the link-edit control cards were created from the Gen skeletons using file tailoring and then a single INCLUDE was added for the dialog manager, with the remaining modules included using autocall.

With Gen r8, the format of the link-edit control cards has changed. Instead of using autocall to resolve called action blocks, each non-compatibility Gen module referenced in the load module has a specific INCLUDE APPLOAD or IMPORT statement.

This means that if you create the link-edit control cards outside of Gen, you will have to replicate this approach. A new business system library is available which Gen populates with the link-edit control cards (called binder control cards using the new IBM terminology), so these are available if required.

Another change is that dynamic action blocks that are packaged into a z/OS Lib are now called using a literal instead of a variable, for example, in Gen r7 and previously, a call to action block AAAA would be implemented as:

09 AAAA-ID PIC X(8) VALUE 'AAAA'.
...

CALL AAAA-ID

In Gen r8, if AAAA is included in a z/LIB, this is now

CALL 'AAAA'

If you are installing code using multiple models, then the use of external action block and external system load libraries must be carefully considered to ensure that dynamic action blocks packaged into a z/LIB are not found via autocall, since the binder would then statically link the object modules instead of resolving them with the IMPORT statement.

Wednesday, 13 January 2010

Mapping Group Views

Gen allows you to view match group views with differing cardinalities on USE statements and dialog flows, provided that the receiving view has a higher cardinality than the sending view.

However, the view match remains intact even if you subsequently change the cardinality of the sending view to a value greater than that of the receiving view. You could therefore end up with a sending view larger than the receiving view, which can cause unexpected results such as loss of data without a runtime error. At that point you could not establish the view match again, yet the existing view match remains 'valid' in the model.

If the group view sizes were initially the same, the developer might not think that they need to add in any extra validation logic, but a subsequent change to one of the group views might then cause problems.

A new check in VerifIEr allows a quick check for differing group view cardinalities with a warning if they differ but are valid and an error if they differ and are invalid.
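As an illustration of why the mismatch matters, here is a hedged C analogy (not Gen's actual runtime behaviour): if the mapping behaves like copying at most the receiving view's cardinality, occurrences beyond that limit are silently dropped with no runtime error.

```c
#include <stddef.h>

/* C analogy for mapping a group view: copy at most the receiving
 * view's cardinality, so sending occurrences beyond recv_max are
 * silently lost, with no error to alert the developer. */
size_t map_group_view(const int *send, size_t send_count,
                      int *recv, size_t recv_max) {
    size_t n = (send_count < recv_max) ? send_count : recv_max;
    for (size_t i = 0; i < n; i++)
        recv[i] = send[i];
    return n;   /* occurrences actually mapped */
}
```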

Monday, 21 December 2009

Discovering hidden errors

Since we originally developed our VerifIEr QA and code checking product over two years ago, the scope of its checks has expanded in some very interesting directions.

The initial checks were primarily aimed at standards enforcement, for example, object names, CBD architecture enforcement, use (or abuse) of various properties (e.g. READ properties), etc.

More recently however we have developed a number of checks for customers that are aimed at improving code quality by detecting errors that might be otherwise difficult to find.

We have run these checks on our own products (which are developed with Gen) and have been surprised at the number of potential errors that have been encountered. Usually the more serious errors are detected during testing, but sometimes not!

Examples of the checks that we have found especially useful include:

  1. Local views that are referenced but never set to a value, indicating that either the local views need to be populated or the code is no longer required;
  2. Hidden fields (for example fields on a GUI that are placed above the top border) that are not read-only (the user could therefore tab to the field and change its value);
  3. Export views not fully mapped to an import view on a screen or window;
  4. Checking the tab sequence for GUI windows and dialog boxes;

These sorts of errors are notoriously difficult to spot via code inspection and can also be missed during testing.

Thursday, 22 October 2009

Unused and Duplicate Prompts

We have been developing our products for close to 20 years now and one of the consequences has been that we have found quite a few unused and duplicate prompts in the models. We also have a multiple model architecture and a policy of migrating the entire data model to each of the development models. This results in all of the prompts also being duplicated (and unused) in all of the models.

Apart from having a large number of redundant prompts in the models, it can also make the selection of prompts in the window/screen designer tedious, because the large lists of unused and duplicate prompts make locating the desired prompt harder.

There is a Gen function in the toolset to delete unused prompts, but this requires the model to be downloaded, and ours are too big. It will also not get rid of duplicates.

We have therefore written a new genIE function to both delete unused and consolidate duplicate prompts.

The result is faster downloads because you are not downloading extra prompts and also easier selection of existing prompts in the window/screen designer.

Tuesday, 14 July 2009

Almost like having a new machine

We recently upgraded our anti-virus software to the latest 2010 release and it was immediately noticeable how much slower our machines were, especially our heavily used CSE machine. We also found the desktops to be much slower, and so the new release was de-installed and the older (2009) release re-installed. It still took up to 25% of the CPU though, and so we decided to try some alternatives. After a bit of research, we selected one of the other leading products to trial. Both desktops and the CSE run much faster and it is like having a new machine and so the upgrade can be delayed for a while!

Monday, 29 June 2009

In praise of integration

Having spent over 20 years developing our products using Gen, it is clear that one of the main benefits is the low cost of maintaining applications developed with Gen. I think that there are many reasons for this, some of which are due to inherent features of Gen and others derive from the methods and standards used by the development project. In my view, a key feature of Gen that contributes to the low cost of maintenance is the integrated nature of the analysis and design tools.

The early marketing of IEF (as Gen was called in the early days) emphasised the integrated nature of the product and IEF was called an i-CASE (integrated Computer Aided Software Engineering) tool to distinguish it from point solution CASE tools. Unfortunately many i-CASE tools were nothing of the sort and few if any came close to delivering the 100% code generation and great success of Gen. This resulted in the CASE / i-CASE market getting a bad name, through little fault of IEF.

However, having chosen the best integrated development tool, shouldn’t a Gen project maximise the benefits of that integration? The trend to only use Gen for the server and batch parts of a project concerns me. Whilst there are undoubtedly situations where Gen is not the best choice for developing the user interface, I suspect that there are others where the choice not to use Gen for the front-end has been a mistake, due to the resulting increased cost of development and maintenance.

When the user interface is developed with a separate tool, the interface between the presentation layer (client) and the business logic (server) has to become much more formalised at an early stage in the life-cycle, especially when the client and server parts are developed by separate teams. Even if you are using CBD/SOA or some other development approach that advocates stable, published interfaces, there are still many situations when a rapid, iterative approach to development will benefit from having one person develop the client and its closely coupled servers at the same time and with the same tool.

The goal of 100% code generation and integrated nature of Gen means that there are boundaries to the product's capabilities. Whilst there are features that allow external code (external action blocks, OCX controls, etc.), there are still limitations on what can be accomplished with Gen. The perceived weakness of Gen for developing sophisticated user interfaces has made some Gen projects avoid Gen for the user interface or presentation layer of an application.

A few years ago, I was visiting a long standing Gen user who had used Gen very successfully to develop 3270 and batch applications. I demonstrated GuardIEn to the development manager, and then we went for lunch. He explained that they were now moving to client/server but had decided not to use Gen for the front end because they did not think that you could develop a good front end. I asked him what they were looking for, and his response was that they would like to be able to develop something that looked like GuardIEn! He did not realise that GuardIEn was a Gen developed application with the user interface created using the same Gen design tools that they had decided were inappropriate.

Now, to achieve the sophisticated look and feel of our products with Gen has not been easy. We have had to develop an add-on tool (IETeGUI) and learn how best to achieve the desired results. But is this not the case with any tool? Don’t just take the product out of the box and expect to develop a very sophisticated user interface immediately. It needs quite a bit more work than that – probably more than you would expect. It is not easy to create a great user interface with Gen, but it can and has been done, and in my view, the extra effort is more than compensated for by the significant reduction in development and maintenance effort through the use of an integrated tool with 100% code generation.

Monday, 15 June 2009

Dog Food or Champagne?

There is a saying about eating your own dog food, or the more pleasant version, drinking your own champagne. The point is that if you really believe in your own product, then you would use it yourself, and therefore I prefer the dog food analogy since you would only eat your own dog food if it was really palatable, whereas you might be prepared to drink anything that is alcoholic!

Anyway, getting back to the main point, if you are a software developer and you can use your own products, then you have a big incentive to improve them for your own benefit. This is why I was really pleased when I heard that CA would be using Gen within their development team as part of the Mainframe 2.0 initiative.

Because we develop our products with Gen, we are also able to use our own tools as well, and this positive feedback loop has resulted in many improvements and enhancements to make the ‘dog food’ as palatable as possible. An example of this is in the area of version control.

One of the most useful tools in the armoury of a developer is the ability to see what has changed in the source code. The ability to see the what, why, when and who (what has changed, why was it changed, when was the change made and who made it) makes diagnosing a problem much easier. With Gen, a single model can only contain a single version of an object, so if the object is changed, you lose the ability to see what it looked like the moment before the change, unless you have saved the previous version somehow (via migration, model copy, etc.).

Since it is impractical to save the previous version every time a change is made, often the diagnosis of a problem is made unnecessarily hard because this useful information is not available. For example, a user reports a problem in the test system that they noticed a few days ago. In the meantime, the model has been changed and you are therefore unable to see what the changes were (only that the object was last changed on a specific date/time). If you cannot reproduce the problem, you cannot then tell if the problem has been fixed, or if your test case does not properly test for the issue.

We have found the ‘minor versions’ feature of GuardIEn especially useful. This allows you to track every change made to a Gen object and see who changed it, when it was changed and what was changed (down to properties and individual action diagram statements). When linked to a GuardIEn Change Request, you can also see why it was changed and what other objects were affected by the same change.

I know that we would say this anyway, but we have found this capability to be invaluable in the on-going maintenance of our products.