
MODELING AND IMPLEMENTING FUNCTIONALLY-NORMALIZED PROCESSING TRANSACTIONS

Disintegrated, massively redundant data breeds massively redundant, extremely complex application program code. As enterprises use data modeling techniques to build shared, subject-oriented, intelligently normalized, flexible databases, complementary techniques for designing the automated processes (transactions) that act on those databases have become mandatory. The same principles which make subject normalization of data so desirable apply equally well to the design of functionally-normalized automated transactions. Normalizing data ensures that the same data is never stored twice (unless intentionally necessary); functional normalization of transactions ensures that the same function is never programmed twice, saving time, effort, and cost. It further ensures that when the enterprise decides to change its rules about how something is done, the required re-programming is remarkably quick and inexpensive compared to today's disintegrated, redundant systems (a small sketch of the idea follows below). Using these modeling techniques, it is possible to quickly design and build sharable, stable, flexible, non-redundant, complementary, and provably correct automated transactions, each representing a distinct logical unit of work in the business.
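To make "the same function is never programmed twice" concrete, here is a minimal sketch in Python (the seminar names no language, and every identifier here is invented for illustration): one routine embodies a business rule, every transaction that needs the rule calls it, and a rule change is an edit in exactly one place.

    # A business rule programmed exactly once; all names are invented
    # for this illustration and do not come from the seminar materials.

    def credit_limit_for(rating: str) -> int:
        # The enterprise's credit-limit rule lives in one place only.
        # When the business changes the rule, only this function changes.
        return {"A": 50_000, "B": 20_000, "C": 5_000}.get(rating, 0)

    def create_order(rating: str, order_total: int) -> bool:
        # The CREATE-ORDER transaction reuses the shared rule...
        return order_total <= credit_limit_for(rating)

    def approve_credit_increase(rating: str, requested_limit: int) -> bool:
        # ...and so does every other transaction that needs it, so the
        # rule can never drift out of sync between programs.
        return requested_limit <= 2 * credit_limit_for(rating)

Contrast this with the disintegrated case, where the same rule is re-programmed in dozens of applications and each copy quietly diverges from the others.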

This form of modeling is based on the simple "CRUD" principle: Create, Retrieve, Update, and Delete, the four fundamental actions a programmed transaction can perform on a data structure. Once a Conceptual Data Model is complete, abstracting the necessary Create, Update, and Delete transactions from it becomes almost trivially quick and easy, saving substantial analytical effort up front and substantial programming and testing effort afterward.

Consider: if an enterprise changes its ways and builds a single, shared CUSTOMER database, it needs only a single transaction/program which CREATEs a new CUSTOMER in that database. The logic which edits and validates the incoming data about a new CUSTOMER must be exactly the same logic as, and should be shared with, the UPDATE transactions used to change CUSTOMER data later. Likewise, only a single DELETE transaction is needed to remove a CUSTOMER from the database, and it performs whatever logical checks are necessary to ensure that the business rules about retaining a CUSTOMER are properly applied before the DELETE operation can be completed. (A sketch of this structure follows below.)

When an enterprise instead has CUSTOMER data stored in scores or hundreds of different, usually disintegrated data stores, the programming to properly Create, Update, and Delete a CUSTOMER in any one of those data stores is invariably redundant and usually extremely inconsistent; this is the source of most data quality problems, and it is easily resolved once single-source, shared data structures are modeled, designed, and built. In other words, cleaning up the redundant, inconsistent data mess also cleans up the redundant, inconsistent application code mess!

CRUD-oriented transactions are readily implemented using modern programming languages and client/server divisions of labor. This seminar is the automated-process complement to the Conceptual Data Modeling and Logical/Physical Data Modeling seminars, which are strongly recommended prerequisites. (We also offer a combined seminar, Integrated Process and Data Modeling, which presents data and process modeling in the general-to-specific, interleaved sequence typical of real-life practice.)
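The following sketch, again in Python with invented names, shows the shape of the CUSTOMER example above: one CREATE, one UPDATE, and one DELETE transaction against a single shared store, with the edit/validation logic programmed once and shared by CREATE and UPDATE, and the DELETE guarded by a (hypothetical) retention rule. A real implementation would sit behind whatever database and transaction manager the enterprise uses; only the structure matters here.

    # Hypothetical sketch of functionally-normalized CUSTOMER transactions.
    # All names are invented; the point is the shape: one shared validation
    # routine, one transaction per CRUD action, business rules in one place.

    customers: dict[int, dict] = {}  # stand-in for the shared CUSTOMER database

    def validate_customer(data: dict) -> None:
        # Edit/validation logic programmed once, used by CREATE and UPDATE alike.
        if not data.get("name"):
            raise ValueError("CUSTOMER name is required")
        if "@" not in data.get("email", ""):
            raise ValueError("CUSTOMER email looks invalid")

    def create_customer(cust_id: int, data: dict) -> None:
        # The single CREATE transaction for the shared database.
        if cust_id in customers:
            raise KeyError("CUSTOMER already exists; CREATE must not duplicate")
        validate_customer(data)            # same rules as UPDATE
        customers[cust_id] = dict(data)

    def update_customer(cust_id: int, changes: dict) -> None:
        # The single UPDATE transaction, sharing CREATE's validation.
        if cust_id not in customers:
            raise KeyError("no such CUSTOMER")
        merged = {**customers[cust_id], **changes}
        validate_customer(merged)          # same rules as CREATE
        customers[cust_id] = merged

    def delete_customer(cust_id: int) -> None:
        # The single DELETE transaction, applying a retention rule (here,
        # an invented "no open orders" check) before removal completes.
        if customers[cust_id].get("open_orders"):
            raise RuntimeError("business rule: CUSTOMER with open orders is retained")
        del customers[cust_id]

The design point is that the validation rules exist in exactly one routine: when the enterprise changes its rules about CUSTOMER data, only validate_customer changes, no matter how many transactions call it.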

 
TOPICAL OUTLINE
 

DURATION: 5 days

TARGETED AUDIENCES (recommended maximum number of attendees: 25)

PREREQUISITES: the Conceptual Data Modeling and Logical/Physical Data Modeling seminars (strongly recommended)




© 2013 WILLIAM G. SMITH