
Pragmatic Enterprise Architecture (2014)
Strategies to Transform Information Systems in the Era of Big Data

PART VIII Miscellaneous

Abstract

This part is a collection of special topics that may resonate better now that the reader has experienced the journey of the prior sections, and that augment the tools the reader has at their disposal, such as a better understanding and a comprehensive inventory of nonfunctional requirements. This section also offers the reader a summary and a set of conclusions that tie back to the theme of the book, which is ultimately about the skills needed in large organizations that rely upon automation while keeping that automation from becoming overly inefficient or even running out of control.

Keywords

research companies

Benjamin Franklin

deliver value

NFRs

nonfunctional requirements

business rules

nonbusiness-specific requirement categories

accessibility

adaptability

affordability

architectural compatibility

audit ability

availability

backup and restorability

capacity

capacity predictability

certify ability

change control ability

completeness

compliance ability

configurability

configuration management ability

cost predictability

data cleanse ability

data governance alignment

data ingest ability

data mask ability

data source ability

debug ability

deliverability

dependability

dependencies on external systems

deploy ability

documentable

disaster recoverability

distribute ability

efficiency

effectiveness

emotional factors

escrow ability

exploitability

extensibility

failover ability

fault tolerance

flexibility

hardware compatibility

handle ability

integration ability

internationalization

interoperability

legal

licensing-infringement

patent-infringement avoid ability

likeability

maintainability

network topology autonomy

open source ability

operability

performance

performance predictability

platform compatibility

price negotiability

privacy

portability

quality

recoverability

reliability

report ability

requirements traceability

resilience

resource consumption predictability

resource constraint tolerance

response time for support

reusability

robustness

scalability

security

self-serviceability of end users

software backward compatibility

software forward compatibility

stability

staffing and skill availability

standards compliance

supportability

testability

total cost of ownership

traceability

trainability

transaction velocity

usability

verifiability

buzzwords

object relational impedance mismatch

encapsulation

accessibility

interfaces

data type differences

structural and integrity differences

transactional differences

pragmatic enterprise architecture

economic theory

conclusion

From Good to Great

8.1 Special Topics

There are a handful of related topics that simply do not fit neatly into the main sections of this book. One such topic is that of vendors and consulting firms, many of these stories involving particular big-name vendors and big, expensive products that come with big, expensive consulting engagements. There is a significant difference between vendors providing value to customers and vendors maximizing their own profits for the quarter.

How many times have we seen vendors deliver software packages that fall out of use, place high-priced individuals with little experience on a project, or deliver presentation decks that instantly become shelf-ware costing hundreds of thousands of dollars?

That said, there are a small number of vendors in the marketplace, some large, some small, that do understand the problem and that possess the knowledge and experience to address the challenges. These few vendors are the ones that have the attitude established by Benjamin Franklin, which is to deliver value to the customer the old-fashioned way: undersell and overdeliver.

One way to find the great vendors out there and to protect your company from the others is to leverage your first and best line of defense: your smart and lean enterprise architecture practice, not major research companies with vendor relationships.

8.1.1 Nonfunctional Requirements

8.1.1.1 Background

The concept of functional versus nonfunctional requirements has been the subject of a debate that began in the 1980s. The notion was that nonfunctional requirements were different from functional requirements because they represented concepts regarding the properties of a system that stemmed from desirable design principles. As such, they were easily separated from functional requirements simply because they were not specific: they were expressed as general characteristics of a system, in nonquantifiable ways, that formed qualities that were good to have, such as being secure, reliable, and cost-effective.

The tricky part is that these same qualities can be and often are expressed in precise terms, such as:

- Users of the system shall have their user names authenticated by a common framework administered by the business compliance department to determine their access rights to each system capability. This statement would then be decomposed further into the following (a code sketch of this decomposition follows the list):

- The first function to be performed is to acquire the unique user identifier from the logon function and to pass it to the LDAP directory to retrieve the department identifier and the list of user roles.

- The department and user roles would then be validated against the list of business data glossary data points that the user requested, where data points that the user is unauthorized to view would be masked using the masking template associated with the particular business data glossary data point immediately prior to being rendered to the data visualization layer.

- The business rules associated with data masking template requirements can be found in the enterprise data security standards document under the section named ‘Data Masking Template’ requirements.

- Databases of the system shall have physical protection as well as system access protection.

- Physical protection of databases shall be provided by being placed on servers that are protected physically within a secured area that is fenced in within the data center.

- System access protection shall be provided by network security to ensure that only database administrators and authorized applications have access to the databases of the application on an as needed basis only for the period required for them to perform previously approved activities.
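
To make the decomposition above concrete, the following is a minimal Python sketch of how such a requirement might be implemented. The directory lookup, the entitlement table, and the masking templates are hypothetical stand-ins for illustration, not a prescribed design.

```python
# Hypothetical sketch of the decomposed requirement above: retrieve the user's
# department and roles, then mask any business data glossary data points the
# user is not authorized to view before they reach the visualization layer.

# Hypothetical masking templates keyed by business data glossary data point.
MASKING_TEMPLATES = {
    "customer_ssn": lambda v: "***-**-" + v[-4:],
    "customer_name": lambda v: v[0] + "****",
}

# Hypothetical entitlements: which roles may view which data points.
ROLE_ENTITLEMENTS = {
    "loan_officer": {"customer_name"},
    "compliance_analyst": {"customer_name", "customer_ssn"},
}


def lookup_directory(user_id):
    """Stand-in for the LDAP directory call returning (department, roles)."""
    directory = {"jdoe": ("mortgage_origination", ["loan_officer"])}
    return directory.get(user_id, (None, []))


def render(user_id, requested_points, record):
    """Return the requested data points with unauthorized values masked."""
    _department, roles = lookup_directory(user_id)
    allowed = set()
    for role in roles:
        allowed |= ROLE_ENTITLEMENTS.get(role, set())
    rendered = {}
    for point in requested_points:
        value = record[point]
        if point in allowed:
            rendered[point] = value
        else:
            # Apply the masking template associated with this data point.
            rendered[point] = MASKING_TEMPLATES[point](value)
    return rendered


print(render("jdoe", ["customer_name", "customer_ssn"],
             {"customer_name": "Alice", "customer_ssn": "123-45-6789"}))
```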

To many, the specificity of these requirements then caused nonfunctional requirements to suddenly be categorized as functional requirements. However, I think we can agree that there is probably something wrong with a taxonomy for categorizing requirements that causes the categorization to change simply as a result of how much detail was used in stating the requirement. After all, what is the threshold of detail that can be added to a requirement before it changes from a nonfunctional requirement to a functional requirement?

There is, however, a much easier way to view the topic of functional versus nonfunctional requirements, and requirements in general.

To begin, for any requirement to be implementable, as well as traceable for that matter, it has to be specific and detailed. Requirements that are vague cannot be implemented; at the very least, they would have to be made specific if the outcome is to be predictable and repeatable.

Sometimes, vague requirements are understood to simply be preferences, which can be easily lost or ignored, as their exact meaning is up to the interpretation of the individual who reads them. Only once they are stated in specific terms involving inputs and outputs can they receive treatment as requirements.

This is not to say that all specifically stated requirements can be implemented, but at least now there is a chance that there may be a way to implement them, and that no matter who reads them, each person will interpret them in a consistent manner.

8.1.1.2 Business Rules

Now that we have established what “requirements” are and are not, note that requirements must contain business rules, where the business rules can be specific to the particular line of business (e.g., mortgage origination), the industry (e.g., retail financial services), or business in general (e.g., retail business).

If a requirement contains one or more business rules that are particular to the specific line of business, industry, or business in general, then let us declare that it is probably safe to classify it as a “functional requirement.” However, what constitutes a business rule is worth mentioning.

A business rule can only exist at a detail level where specific data are present and/or specific events have occurred and there are clear activities to be performed based upon the presence or content of those data and/or events.

Depending upon the type of system, the specific data or event can be associated with the external environment or it can be internal within the system.

8.1.1.3 Nonfunctional Requirements

Nonfunctional requirements must also contain business rules; however, those rules belong to a different set of categories. Instead of being pertinent to a specific line of business, industry, or business in general, they belong to one of the nonfunctional requirement categories, also known as nonbusiness-specific requirement categories.

These nonbusiness-specific requirement categories are not requirements; they are only categories (aka “ilities”) within which nonbusiness-specific requirements may be organized.

8.1.1.4 Nonbusiness-Specific Requirement Categories

Nonfunctional requirement categories may include, but are not limited to:

Accessibility—refers to the relative ease with which end users of the system can make contact with and get into the system. Testing can be performed from each stakeholder location and end-user computing platform. Accessibility testing can be achieved using automation that resides on each end-user computing platform, which can then ping the system to test the ability to access it.
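
As an illustration of the kind of automated probe described above, here is a minimal Python sketch; it assumes the system exposes an HTTP health endpoint, and the URL shown is a placeholder.

```python
# Minimal accessibility probe intended to run on each end-user computing
# platform; it records whether the system could be reached and how long the
# round trip took. The endpoint URL is a placeholder assumption.
import time
import urllib.request

ENDPOINT = "https://example.internal/health"  # placeholder


def probe(url, timeout=5.0):
    start = time.monotonic()
    try:
        with urllib.request.urlopen(url, timeout=timeout) as response:
            reachable = 200 <= response.status < 400
    except OSError:
        reachable = False
    return reachable, time.monotonic() - start


if __name__ == "__main__":
    ok, elapsed = probe(ENDPOINT)
    print(f"reachable={ok} elapsed={elapsed:.3f}s")
```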

Adaptability—refers to the degree to which the system components use open standards that make it easy for its components to be interchangeable. It can be performed by switching out interchangeable components with analogous capabilities.

Affordability—refers to the cost of the system in terms of money, time, and personnel resources relative to the other priorities of the organization. Unless frameworks exist to act as accelerators for incorporating nonbusiness requirements, the inclusion of additional nonbusiness requirements tends to act in direct opposition to affordability.

Architectural compatibility—refers to the degree to which the system is compliant with enterprise architecture standards and frameworks. Unless there are frameworks in place to measure this, it can be performed by engaging a subject matter expert in enterprise architecture to assess the system for its compliance with the pertinent architectural standards.

Audit ability—refers to the degree to which the system can be evaluated for properly performing its activities and the accuracy with which it performed those activities. It can be performed by incorporating metrics within each system capability to provide the appropriate level of record keeping for regular operation or for testing purposes.

Availability—refers to the portion of the time that the entire system can be considered available and functional across all stakeholders and their locations. Testing of availability can be performed from each stakeholder location and end-user computing platform. Availability testing can be achieved using automation that tracks all attempts to invoke capabilities of the system, which can then be compared with a system log of requests to determine whether any requests for capabilities failed to arrive or were not supported to completion.

Backup and restorability—refers to the ability to establish a backup of the entire system and apply it on demand when restoration is required. Testing of backup and restorability can be accomplished by data services subject matter experts who would test regular backups and invoke the restoration process.

Capacity—is the knowledge of system limitations to support each use case and combinations given various system workloads. Testing capacity is conducted by applying increased workloads for a given computing configuration to determine the maximum level of work that can be performed before service-level agreements can no longer be met.

Capacity predictability—is the ability to predict system limitations to support each use case and combinations given various system workloads using various computing configurations. It can be performed during system capacity testing usually in the form of a formula involving resource availability and consumption rates of computing platforms.
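
A back-of-the-envelope version of such a formula might look like the following sketch, where both the available resources and the per-transaction consumption rates are invented for illustration.

```python
# Rough capacity prediction: given the resources available on a computing
# configuration and measured per-transaction consumption rates, estimate the
# workload at which the first resource becomes the bottleneck. The numbers
# below are illustrative assumptions, not measurements.

available = {"cpu_cores": 16, "memory_gb": 64, "disk_iops": 20000}

# Resources consumed per transaction per second of sustained workload.
per_transaction = {"cpu_cores": 0.002, "memory_gb": 0.01, "disk_iops": 3.5}


def predicted_max_tps(available, per_transaction):
    """Predicted maximum transactions per second before any resource saturates."""
    return min(available[r] / per_transaction[r] for r in per_transaction)


print(f"predicted ceiling: {predicted_max_tps(available, per_transaction):.0f} tps")
```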

Certify ability—is the ability to confirm that the system can meet its business-specific and nonbusiness-specific requirements over a period of time through periods of full deployment from the perspective of all stakeholders. Unless there are frameworks in place to measure this, it can be assessed by engaging a competent subject matter expert to confirm that the system can meet the requirements of all stakeholders over a period of time.

Change control ability—is the ability to appropriately manage changes to and versions of each and every component of the system through the life cycle of the system. It can be assessed by engaging a competent subject matter expert to confirm that the scope of change control is appropriate and that the tools and procedures are appropriately managed by the components of the system as they change and move through the life cycle.

Completeness—is the degree to which the capabilities provided to each end user fully support their requirements, such that additional manual activities or the use of other automation systems are no longer required. It can be assessed by engaging a competent subject matter expert to determine the degree to which the scope of capabilities is complete.

Compliance ability—is the degree to which rules and policies from business, legal, regulatory, human resource, and IT compliance are adhered to. Unless there are frameworks in place to measure this, it can be assessed by engaging a competent subject matter expert to review the rules and policies of compliance and then determine the degree to which compliance can be traced through architecture, design, and implementation of the system.

Configurability—is the proportion of the system that is easily configurable from a parameter and data-driven perspective, as opposed to requiring custom development and nonconfiguration-type customizations. Unless there are frameworks in place to measure this, it can be assessed by engaging a competent subject matter expert to review the configurable and nonconfigurable aspects of the system, such as adding application and/or database nodes, business data inputs, business data outputs, reports, and business rules into rules engines.

Configuration management ability—refers to the ability to appropriately manage collections of changes that together form a matched set or version of the system. Unless there are frameworks in place to measure this, it can be assessed by engaging a competent subject matter expert to review the scope and workflow of configuration management, as well as its tools and procedures, through the life cycle of the system.

Cost predictability—refers to the ability to project the total cost of the various areas of system development, maintenance, operations, and eventual decommissioning. It can best be managed through cost models that are developed over time within the organization that are based upon experience involving similar systems.

Data cleanse ability—refers to the ability to cleanse or scrub the data to make it usable to a system or analytical capability.

Data governance alignment—refers to the ability of the system to support a variety of data governance-related capabilities that serve to manage the metadata of the information content of the system. It can be assessed by whether end users can access a business data glossary, manage ownership of data categories, administer access rights to owned data, and identify existing reports, queries, dashboards, and analytics.

Data ingest ability—refers to the ability to ingest a file that has been received, including the ability to interpret its schema or metadata, to decrypt it, to locate the correct algorithm to decompress it, or more generally to determine its contents.

Data mask ability—refers to the ability to mask the contents of sensitive data without rendering the file useless for application and/or analytical purposes.

Data source ability—refers to the ability to discover viable data sources to supply the informational needs of a system or analytical capability.

Debug ability—is the degree to which error handling is incorporated into each capability of the system to support detection, information tracking and collection, and reporting of abnormal or unexpected conditions across the system. Unless there are frameworks in place to measure this, it can be assessed by subject matter experts for comprehensiveness in error handling, process tracking and data collection for diagnostics, and reporting.
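
A minimal sketch of this style of error handling, using a hypothetical wrapper applied to a single capability, might look like the following; the capability, the logging setup, and the diagnostic fields are illustrative assumptions.

```python
# Minimal error-handling wrapper that could be applied to each capability of a
# system to detect abnormal conditions, collect diagnostic context, and report
# them. The capability shown and the logging setup are illustrative.
import functools
import logging
import traceback

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("diagnostics")

PRICES = {"A1": 10.0}  # stand-in data for the example capability


def capability(name):
    """Wrap a capability so failures are detected, tracked, and reported."""
    def decorator(func):
        @functools.wraps(func)
        def wrapper(*args, **kwargs):
            try:
                return func(*args, **kwargs)
            except Exception:
                # Collect the context needed to diagnose the abnormal condition.
                log.error("capability=%s args=%r kwargs=%r\n%s",
                          name, args, kwargs, traceback.format_exc())
                raise
        return wrapper
    return decorator


@capability("price_lookup")
def price_lookup(product_id):
    return PRICES[product_id]  # a KeyError here is captured and reported above


print(price_lookup("A1"))
```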

Deliverability—is the ability to supply system capabilities in an operational state to end users and stakeholders on schedule. It can be assessed by comparing scheduled and actual delivery dates of various system capabilities.

Dependability—is the degree to which the capabilities delivered by the system meet the expectations of the end users when the capabilities are needed. It can be assessed by periodically surveying end-user satisfaction with the system’s ability to consistently deliver services.

Dependencies on external systems—is the degree to which capabilities delivered by the system are dependent upon activities of external parties, data sources, and systems. It can be assessed by counting the parties, data sources, and systems that are required for this system to meet its objectives.

Deploy ability—is the degree to which capabilities of the system can be made available to end users across all locations and time zones with minimal additional effort. It can be measured by determining the level of additional effort required to deploy capabilities across geographic locations and time zones.

Documentable—is the degree to which activities of the system within its various areas of capabilities can be supported by documentary evidence. It can be measured using a sampling approach to confirm a set of data instances that are in various normal and abnormal states with documentary evidence from alternative sources to illustrate the accuracy of the normal and abnormal states.

Disaster recoverability—is the degree to which system components can reside on VMware in a primary and a DR data center. Disaster recoverability is enhanced when components of the system can operate in environments that provide virtualization to avoid the need to exactly replicate system DLLs and drivers across nonvirtualized operating systems.

Distribute ability—is the degree to which components of the system can be distributed across more numerous computing platforms that are connected through a network. It can be tested by moving system components across computing platforms over a network.

Efficiency—is the degree to which the system can handle workloads with small increases in resource consumption. Testing increased workloads, while measuring resource consumption using performance monitoring tools can demonstrate efficiency.
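
One simple way to observe efficiency, sketched here with a stand-in workload rather than a real system capability, is to compare elapsed time and peak memory as the workload grows.

```python
# Illustrative efficiency check: run the same (stand-in) workload at increasing
# sizes and observe how elapsed time and peak memory grow relative to the load.
import time
import tracemalloc


def workload(n):
    # Stand-in for a real unit of work.
    return sum(i * i for i in range(n))


for n in (100_000, 200_000, 400_000):
    tracemalloc.start()
    start = time.perf_counter()
    workload(n)
    elapsed = time.perf_counter() - start
    _, peak = tracemalloc.get_traced_memory()
    tracemalloc.stop()
    print(f"n={n:>7} elapsed={elapsed:.4f}s peak_memory={peak} bytes")
```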

Effectiveness—is the delivery of capabilities in relation to the degree of effort where the lesser the effort, the greater the effectiveness. There are several ways to measure effectiveness, where one is to measure the level of effort prior to and after implementation of the system.

Emotional factors—is the degree to which the use of the system provides a stress-free experience to the user for accomplishing their business objectives. End-user experience relative to emotional factors can be surveyed at regular intervals during and after training.

Escrow ability—is the degree to which licensed software components are available on deposit with a third-party escrow agent as a means to protect against default of the software vendor. Confirmation from each vendor can be attained for every software version of source code for those products licensed to your organization.

Exploitability—is the degree to which stakeholders and end users can exploit the system for previously unanticipated capabilities. End-user experience of the ability of the system to provide unanticipated capabilities can be surveyed.

Extensibility—is the degree to which capabilities can be added to the system, thereby taking into consideration the potential future growth of the system which can extend its life. Unless there are frameworks in place to measure this, it can be evaluated by a subject matter expert who can assess system architecture, design, and implementation for business and technology extensibility considerations, such as use of industry standard interfaces and protocols.

Failover ability—is the degree to which the system can automatically switch away from infrastructure components that have failed to the remaining operational infrastructure components; this can be routinely tested.

Fault tolerance—is the degree to which infrastructure components have redundant capabilities that provide a way for hardware failures to be compensated for by using redundant components that are automatically routed to. It can be tested by unplugging and turning off components while they are actively being used.

Flexibility—is the degree to which the system can support additional products, product types, workflows, data sources, reports, and analytics. It can be tested by adding additional products, product types, workflows, data sources, reports, and analytics.

Hardware compatibility—is the degree to which the system can be supported on the intended hardware and operating system environment. It can be tested by installing the system on the intended hardware and operating system environment.

Handle ability—is the degree to which end users of the system can easily employ capabilities of the system without committing user errors. It can be tested by measuring error rates of end users and by conducting user surveys to assess the frequency of rework.

Integration ability—is the ability to bring together systems and subsystems into a cohesive framework that minimizes the replication of data and streamlines the transfer of data from one system to another. Unless there are frameworks in place to measure this, it can be best assessed by a subject matter expert.

Internationalization—is the degree to which the system supports alternate languages, through the integration of translation software and Unicode support in selected areas, as well as national differences, including compliance and regulatory differences. Unless there are frameworks in place to measure this, it can be assessed by a subject matter expert and can be tested.

Interoperability—is the ease with which the system can interface with other systems of similar and diverse computing platforms, operating system environments, and network protocols. Unless there are frameworks in place to measure this, it can be best assessed by a subject matter expert and can be tested.

Legal, licensing-infringement, patent-infringement avoid ability—is the degree to which legal, license-infringement, and patent-infringement issues can be avoided through rigorous procurement and legal standards, talent for vetting vendors, and internal procedures for IT compliance. Unless there are frameworks in place to measure this, it can be assessed by engaging subject matter expertise.

Likeability—is the degree to which users like to use and work with the system. End-user experiences of the system for likeability can be surveyed at regular intervals.

Maintainability—is the ease with which defects can be isolated and corrected, new requirements can be supported, and the life of the system extended with minimal additional difficulty over time due to frameworks that enhance maintainability. Unless there are frameworks in place to measure this, it can be evaluated by a subject matter expert who can assess system architecture, design, and implementation for frameworks that minimize the additional difficulty associated with incorporating change.

Network topology autonomy—is the degree of independence that the system has from logical and physical network topology. Unless there are frameworks in place to measure this, it can be evaluated by a subject matter expert who can assess the system for such dependencies.

Open source ability—is the degree to which an open source community would likely take interest in the product or framework to further enhance and support it. It can be gauged by engaging the process for donating software to the Apache Foundation and by engaging vendors who may wish to support it as open source.

Operability—is the degree to which the parts of the system can work together to meet the objectives of the system and can be determined through testing the parts of the system in an integration test environment for business and system requirements and business rules.

Performance—is the ability of the system to complete logical units of work within the necessary period of time for supporting various workloads and can be determined by testing the parts of the system and the system in its entirety while measuring its ability to complete work under various loads using performance monitoring tools and test beds.

Performance predictability—is the degree to which performance of the system can be predicted at various loads and can be determined by developing predictive methods and formulas that can be tested during performance testing.
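
As one possible form of such a predictive method, the following sketch fits a least-squares line to a handful of measured response times (the sample measurements are invented) and extrapolates to higher loads.

```python
# Illustrative performance prediction: fit response time as a linear function
# of concurrent load from a few measured points, then extrapolate. The sample
# measurements are assumptions made for the sake of the sketch.

measured = [(10, 0.12), (50, 0.35), (100, 0.70), (200, 1.45)]  # (load, seconds)


def linear_fit(points):
    """Ordinary least-squares slope and intercept."""
    n = len(points)
    mean_x = sum(x for x, _ in points) / n
    mean_y = sum(y for _, y in points) / n
    slope = (sum((x - mean_x) * (y - mean_y) for x, y in points)
             / sum((x - mean_x) ** 2 for x, _ in points))
    return slope, mean_y - slope * mean_x


slope, intercept = linear_fit(measured)
for load in (300, 500):
    print(f"predicted response time at load {load}: {slope * load + intercept:.2f}s")
```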

Platform compatibility—is the degree to which the computing platform (e.g., hardware and operating system environment) can support the system components (e.g., licensed software, acquired software, and custom built/bespoke software). This is routinely tested.

Price negotiability—is the degree to which aspects of the system (e.g., software licenses, hardware, labor, support, and training) can be negotiated down or restructured over time. It is highly dependent upon the vendor’s ability and willingness to negotiate with procurement subject matter experts.

Privacy—is the degree to which data privacy is architected into the system from the perspective of the workflow of the various roles of individuals and the architecture of the system. Building privacy into the architecture from the beginning is usually the only way to achieve an effective level of privacy.

Portability—is the degree to which a system can be replicated or moved to an alternate hardware and/or operating system environment with little or no effort in application coding, or with simple configuration parameter changes. Some technologies are more portable than others, which can be determined through testing.

Quality—in this context refers to faults within the system, including faults discovered prior to delivery, faults delivered, and faults discovered after delivery, in relation to the scope of the system. Quality can be measured in many ways, such as the number of faults per functional component, lines of code, data points, and user interfaces. Better architectures and designs will minimize the number of quality-related issues.

Recoverability—is the ability to meet recovery time objectives, recovery point objectives, and overall mean time to recovery. Recoverability for Big Data is particularly concerning because, while traditional Big Data products support recoverability, their Hadoop-based counterparts do not. Backups of data, metadata, applications, and configurations are an important part of a Hadoop ecosystem, as many have the misconception that Hadoop replication of data automatically protects one from data loss.

In fact, simple human error can wipe out terabytes of data in a matter of seconds (e.g., creating a Hive table in a Hadoop folder that is already populated with production data; only fs shell deletes copy deleted data to the Trash Server, whereas programmatic deletes do not employ the Hadoop Trash Server; HDFS upgrades involving disk layout changes pose a high risk to existing data). Backups in Hadoop must also have full consistency by backing up the components that form a configuration together (e.g., data and metadata must be backed up in the same flow). Aside from human error, a severe electromagnetic pulse (e.g., from solar activity) can wipe clean petabytes of a Hadoop system above ground.
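
As an illustration of backing up data and metadata in the same flow, the following sketch pairs an HDFS snapshot with a metadata export under a single tag; it assumes snapshots have already been enabled on the directory and that the metastore is a MySQL database reachable by mysqldump, both of which are assumptions made for the sketch.

```python
# Sketch of a consistent backup flow for a Hadoop ecosystem: the data snapshot
# and the matching metadata export are captured in the same run so that they
# form a matched set. Paths, the database name, and credentials handling are
# illustrative assumptions.
import subprocess
import time

DATA_DIR = "/data/warehouse"            # placeholder snapshottable HDFS path
METADATA_DUMP = "metastore-{tag}.sql"   # placeholder local dump file

tag = time.strftime("%Y%m%d%H%M%S")

# 1. Point-in-time, read-only snapshot of the data directory.
subprocess.run(["hdfs", "dfs", "-createSnapshot", DATA_DIR, f"backup-{tag}"],
               check=True)

# 2. Export the matching metadata in the same flow (assumes a MySQL-backed
#    Hive metastore named "hive_metastore" accessible to this user).
with open(METADATA_DUMP.format(tag=tag), "w") as out:
    subprocess.run(["mysqldump", "hive_metastore"], stdout=out, check=True)

print(f"backup set tagged {tag}: data snapshot + metadata dump")
```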

Reliability—is the degree to which the system remains consistently available providing services to end users without interruptions of any type, such as a failure or denial of service resulting from any unplanned condition, including human error, hardware failure, software failure, vandalism, and attack (e.g., distributed denial of service). Depending upon its criticality, a system can detect, protect against, and capture metrics on most any type of potential failure or attack. Reliability is often measured in terms of mean time between failures.

To cite just one example, most variations of UNIX offer a greater degree of reliability than Linux to such an extent that cost differences and standards should be momentarily considered.

Report ability—is the ability to report business data as well as normal and abnormal conditions to the appropriate system stakeholders, administrators, and support staff in a controlled manner with the appropriate set of corresponding data to diagnose and remediate the condition. Unless there are frameworks in place to measure this, subject matter experts can be used to assess the degree to which the system supports report ability.

Requirements traceability—is the degree to which business and nonbusiness functional requirements can be traced into the architecture, design, and implementation of the system. Unless there are frameworks in place to measure this by testing the ability to trace sample sets of requirements into design and implementation components, subject matter experts can be used to assess the degree to which the system supports requirements traceability, encompassing software lines of code, configuration settings, rules engine rules, and complex event processing rules.
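
A lightweight sketch of tracing a sample set of requirements into design and implementation components might look like the following, where every identifier and artifact name is invented for the example.

```python
# Illustrative traceability check: map requirement identifiers to the design
# and implementation artifacts that claim to satisfy them, then report any
# requirement in the sample set that cannot be traced. All identifiers are
# made up for the sketch.

trace_matrix = {
    "NFR-SEC-012": ["design/masking.md", "src/masking.py", "rules/masking.drl"],
    "NFR-AVL-003": ["design/failover.md", "config/haproxy.cfg"],
    "FR-LOAN-101": [],  # no artifacts yet -> not traceable
}

sample = ["NFR-SEC-012", "FR-LOAN-101", "NFR-AVL-003"]

untraced = [req for req in sample if not trace_matrix.get(req)]
print("untraced requirements:", untraced or "none")
```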

Resilience—is the degree to which an acceptable level of service can be maintained in the face of failures by minimizing the elapsed time in which the system is unavailable to end users. For some systems, frequent outages are not a problem, as the system bounces back quickly enough, causing only a momentary delay in which services were not available. Unless there are frameworks in place to measure this, subject matter experts can be used to assess the resiliency of a system.

Resource consumption predictability—is the degree to which the resources consumed can be predicted through various workloads of the system. Unless there are successful frameworks and methodologies in place to measure this, subject matter experts can be used to assess resource consumption predictability of a system.

Resource constraint tolerance—is the degree to which the system can provide an acceptable level of service when various resources (e.g., CPU, memory, paging files, disk space, and network capacity) become constrained. This can be determined through the development of service-level agreements and testing.

Response time for support—is the rate at which levels 1, 2, and 3 issues reported to the helpdesk are resolved. Helpdesks typically have frameworks to report these metrics although there can be significant flaws in these metrics when incidents have to be reported repeatedly without linkages to their original ticket.

Reusability—must meet multiple criteria, beginning with the ability to develop components that are highly reusable, that once created are easily locatable, and that once located can be incorporated without customization affecting prior usage. Developing components that are highly reusable is often poorly understood, causing repositories to house endless numbers of components that are not reusable, not easily locatable, and, if found, not usable without customization. This nonbusiness requirement category requires a highly competent subject matter expert with a strong foundation in both information and application architecture.

Robustness—refers to the degree to which abnormalities of inputs can be gracefully handled to avoid unexpected results or failures. This can be tested and measured with strong test planning of expected results involving a wide variety of use cases that provide handling of normal and abnormal data inputs.

Scalability—is the degree to which additional capacity can be supported through simple actions, such as the addition of commodity or proprietary servers/nodes, blades, and memory cards. This can be tested and measured by introducing additional work or data velocity and then addressing it with additional hardware.

Security—is the degree to which unauthorized access to services and/or data, including the ability to disseminate data, can be appropriately protected. Considerations for security must be provided for in test planning in systems that house sensitive data or that provide services that are necessarily restricted.

Self-serviceability of end users—refers to the degree to which the system supports self-service to end users in a manner that empowers end users to independently meet their own needs. This is an architectural framework that establishes that if IT must be engaged for a service, then that service is provided only once with the provision that it persists so that the same activities (e.g., data analysis, data sourcing, data cleansing, data standardization, data reformatting, data restructuring, and data integration) do not have to be performed again. Unless there are frameworks in place to measure this, subject matter experts can be used to assess self-serviceability of end users.

Software backward compatibility—is the degree to which older standards for interfaces can be supported with newer versions of the system. This can be tested as each new version of the system emerges.

Software forward compatibility—is the degree to which newer standards for interfaces can be supported with older versions of the system. This can be tested as each new interface standard emerges.

Stability—is the degree to which components of the system are stable over time, from the perspective that they will not need to be changed to meet additional or modified requirements. This can be developed as an architectural style that emphasizes a rules engine-driven approach within each aspect of the system framework that would otherwise be susceptible to instability. Unless there are frameworks in place to measure this, subject matter experts familiar with modern architectural frameworks for systems can be used to assess system stability.

Staffing and skill availability—is the degree to which an ample number of resources with the appropriate experience are available for the periods and locations required. Personnel procurement can be conducted in such a way as to gather useful metrics on this topic, although a subject matter expert is generally required to identify and set up the appropriate frameworks.

Standards compliance—is the degree to which applicable standards are represented in standards and frameworks, and can be identified and complied with while incorporating traceability. Without traceability, standards compliance will become a fleeting concept. Unless there are successful frameworks in place to measure this, subject matter experts can be used to assess the degree to which standards compliance is delivered.

Supportability—is the degree to which internal or external resources are able to provide competent support for the system on a timely basis. Unless there are successful frameworks in place to measure this, subject matter experts can be used to assess the supportability of a system.

Testability—is the degree to which testing has been incorporated into the architecture, design, and implementation of the system to address expected and unexpected abnormalities in a controlled and traceable way. Unless there are successful frameworks in place to measure this, subject matter experts can be used to assess the testability of a system.

Total cost of ownership—related to affordability, total cost of ownership is the degree to which the various architectural areas of a system have undergone a thorough return on investment (ROI) analysis.

These include the costs, benefits, and risks associated with deploying:

- applications and decommissioning or reallocating existing applications

- infrastructure and decommissioning or reallocating existing infrastructure

- personnel and decommissioning or reallocating existing personnel

- hosting services and/or setup as well as decommissioning existing hosting services

- operational procedures and decommissioning existing operational procedures

- business capabilities and decommissioning existing ones

Unless there are successful frameworks in place to measure this, subject matter experts can be used to determine the architectural ROI of a system from a comprehensive enterprise architecture perspective.
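
As a worked illustration of this kind of analysis, with every figure invented purely for the arithmetic, the cost areas above might be tallied as follows.

```python
# Illustrative total-cost-of-ownership comparison across the areas listed
# above; every figure is invented for the sake of the arithmetic.

deploy_costs = {
    "applications": 450_000,
    "infrastructure": 220_000,
    "personnel": 600_000,
    "hosting": 120_000,
    "operational_procedures": 80_000,
    "business_capabilities": 150_000,
}
decommission_savings = {
    "applications": 300_000,
    "infrastructure": 180_000,
    "personnel": 250_000,
    "hosting": 90_000,
    "operational_procedures": 60_000,
    "business_capabilities": 0,
}
annual_benefit = 700_000  # projected yearly benefit of the new capabilities
years = 3

total_cost = sum(deploy_costs.values()) - sum(decommission_savings.values())
roi = (annual_benefit * years - total_cost) / total_cost
print(f"net cost: {total_cost:,}  ROI over {years} years: {roi:.0%}")
```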

Traceability—is a synonym of requirements traceability.

Trainability—refers to the level of effort required to train various types of end users to use the system effectively for each type of use case associated with their respective role. Methods to enhance trainability include usage tips that can be built into the system to facilitate rapid learning or a refresh of training. Training subject matter experts can enhance and assess the trainability factors of a system.

Transaction velocity—is the degree to which the system and its various components can support bursts as well as sustained periods of increased load. Unless there are successful frameworks in place to measure this, thorough testing and/or subject matter experts can be used to assess the ability of the system to support bursts as well as sustained periods of increased load.

Usability—is the degree to which the system supports each type of end user using ideal human interfaces to communicate with the system including the ability to understand its user interface and outputs.

Verifiability—is the degree to which the outputs of the system can be independently confirmed as being accurate and timely.

8.1.2 Nonsensical Buzzwords

As with any taxonomy, there are bound to be terms that emerge that misrepresent information and mislead others; most of them are accidental, coming from individuals who see one side of an issue but lack the experience of having encountered the other sides of the issue that help put it in perspective. We see this in books and commonly on the Web.

While it is neither practical nor possible for that matter to identify them all, we will share one of our favorites.

My recommendation is to ask lots of questions to learn what any buzzword means. If you don’t get an explanation that makes it crystal clear, keep asking questions. The others in the room probably have no idea what the buzzword means either.

8.1.2.1 Object Relational Impedance Mismatch

Entity relationship diagrams were in use nearly a decade before IBM announced their first relational database management system. Entity relationship diagrams were routinely used with:

- “hierarchical databases” (e.g., DL/I and IMS),

- “inverted list databases” (e.g., Adabas), and

- “network databases” (e.g., IDMS).

The point here is that entity relationship diagrams provide a basic method with which to depict collections of data points and the relationships that those collections have with one another.

It is therefore amusing how often one can find statements on the Web, including in Wikipedia, that state that entity relationship diagrams can only depict designs for relational databases. But that said, it gets even better.

Now that the reader is knowledgeable in many of the differences between information systems and control systems, it is easy to understand how object-oriented paradigms originated with control systems and then became adapted to information systems.

Yes, the architectural foundation between the two paradigms is different, but that’s only because there are no tangible things that you can consistently touch in an information system.

Stable collections of data points within information systems are “objects” around which the application may be architected, as with collections of data that are identified within a logical data architecture. This goes down to the smallest collection of data points for which there are many synonyms, which include:

- record,

- tuple,

- entity,

- table, and

- object.

A conceptual or logical data model, as represented by an entity relationship diagram, is suited to model the data, regardless of what anyone decides to call the collections of data points. In other words, relational has nothing to do with it.

Now that there are a few generations of developers who know only object-oriented and relational technologies, and who have seen the differences between object-oriented control systems and relational database-oriented information systems, a new term has been coined: “object relational impedance mismatch.”

The following are examples of what has been used as justification for the existence of “object relational impedance mismatch.”

Encapsulation: Object-oriented programming languages (e.g., Ada) use concepts that hide functionality and its associated data within the architecture of the application. However, this reason speaks to application architecture, not database architecture.

Accessibility: Public data versus private data, as determined by the architecture of the application, are introduced as additional metadata, which are not pertinent to data models.

Interfaces: Objects are said to have interfaces, which simply confuses interfaces that exist between modules of applications with data objects.

Data type differences: Object-oriented databases support the use of pointers, whereas relational does not. From the perspective of database management system architectures, the architectures that support pointers include hierarchical, network, and object oriented, whereas inverted list, relational, and columnar do not.

Structural and integrity differences: Objects can be composed of other objects. Entity relationship diagrams support this construct as well.

Transactional differences: The scope of a transaction as a unit of work in the object-oriented world varies greatly from that of relational transactions. This is simply an observation of one of the differences between control systems and information systems. What does “transaction” even mean when you are flying a B2 Stealth bomber, and if the transaction rolls back, does that make the plane land backwards where it took off from?

Okay, I can predict the e-mails that I am going to receive on this last one, but you have to inject a little fun into everything you do.

8.1.3 Pragmatic Enterprise Architecture

What makes enterprise architecture pragmatic?

First, it is important to know what enterprise architecture is. It is not solution architecture for the enterprise, where, without frameworks, one individual is supposed to provide specialized technology advice across business and IT, any more than a single physician is supposed to provide specialized medical advice to all the townsfolk. Rule number one is that EA is not a GP practice, as that would rapidly lead to architectural malpractice.

Just as physicians who specialize in various highly technical areas and their specialized staff support a medical center, enterprise architects provide expertise and services to business leaders and solution architects, who in turn deliver care directly to the home of an application team, or set of application teams.

One of the most valuable aspects of having specialists co-located together in a medical center is that they are better able to collaborate. Similarly, once enterprise architects collaborate with other enterprise architects of different specialties, they discover the synergies that exist and recognize new ways of looking at issues and approaching them, perhaps with much greater effectiveness.

What may be most challenging is that if the local tribe has never seen a medical center before, it is likely that they will not understand what a medical center is, what it does, how it works, and what its value proposition is. This issue is not uncommon when it comes to growing organizations with an emerging need for an EA capability. To such an organization, the role of a true EA capability will sound theoretical, when it is action that they want, as if that action could be supported by one general practitioner given the role of “chief enterprise architect.”

The less likely challenge is that the growing organization knows what an EA practice is and realizes that it needs the services of a mature enterprise architecture practice, but does not have one yet and needs to get one. Normally, the folks in small towns go to a nearby city to get the services and expertise they need as they need them. Hence, the city medical center supports the city plus all of its neighboring towns for any specialty services that a GP would not be able to address.

Similarly, enterprise architecture can be supported by a consulting firm that specializes in the various architectural disciplines that comprise EA; however, few such consulting firms presently exist, and fewer still have the breadth of skills tuned to meet the specific needs of a large organization. Enterprise architecture practices are tuned to the organization, just as medical centers establish the skills most commonly needed by the population in their geographic area, whether that includes specialization for snake bites, tick bites, altitude sickness, or hypothermia.

Another important dimension of what constitutes enterprise architecture is that, as a technology medical center, it provides a significant amount of preventive care. There are numerous known illnesses that culminate from certain inadvisable technology practices, and EA can help the organization establish the healthy behaviors that prevent these expensive and complex automation diseases.

Most importantly, however, pragmatic enterprise architecture is opportunistic. As issues arise across a large organization, or as awareness of issues emerges due to an insightful enterprise architect or senior manager, it presents an opportunity to establish an architectural discipline, its frameworks, and its assorted collateral to help guide solution architects going forward. As architectural disciplines get built out, their specialized personnel are either delivering a good ROI, or they are not. As in any business, you reduce overhead as you find ways of supporting the capabilities required using alternate means.

Pragmatic enterprise architecture as such is essentially a pragmatic medical center that provides services that reduce technology costs in personnel, software licenses, and computing hardware and infrastructure acquisition. Once an organization is mature and knows how to avoid unnecessary technology complexity and costs, then fewer or different enterprise architects can be considered.

8.1.4 Summary

Some people actually believe that there are simply a finite number of jobs in the marketplace, and that if one big company doesn’t employ people, then lots of small companies will. This version of economic theory is not only wrong but reckless.

Companies, especially those that can count themselves among the largest enterprises, are systems that employ tens of thousands of individuals on their payrolls, and tens of thousands more are gainfully employed due to the vast purchases of products, services, and equipment that make the company run. And then there are thousands more who are employed in government positions, and thousands more who are assisted with foreign aid, not to mention the financial support upon which thousands of retired shareholders depend.

While few realize it, and fewer admit it, management and employees of all companies bear an important responsibility to the many families that rely upon the continued success of large enterprises for their livelihood. This is just the beginning of why success of large global conglomerates is so important, and as enterprise architects, you can make a meaningful difference to their continued success.

8.1.5 Conclusion

It was a pleasure writing this book; the first draft was completed while vacationing at a friend’s beach house on the Dutch side of Sint Maarten, overlooking the calm blue waters of the Caribbean, while taking advantage of the unusually affordable French wine that is available on this vividly international island.

The process of putting thought to paper has caused me to become even more aware of the value of the conceptual EA framework that I have attempted to communicate. While no individual concept may be earth shattering, together they are like raindrops that can cause a flood of improvement across any large organization.

I definitely look forward to collaborating again on my next book. With luck this one is the first in a series as we build upon the material in this book.

Some people ask us, “What is our measure of success?”

One view is that the measure of success is whether we can fuel a trend for business and executive management to retake control of the enterprise, so that the many technologies are harnessed wisely instead of them taking on a life of their own.

It is truly our belief that the better executive management understands the issues that plague IT, the better our largest organizations can sustain their part in expanding the world economy.

Why, you might ask, does anyone need to look out for the survival of large organizations? Don’t they automatically grow and survive?

Well, if you were to look at the list of companies in the Fortune 500, either domestically or globally, say 50 years ago, and then look to see where they are now, you could not draw the conclusion that large companies survive and grow just from momentum.

This author is fairly certain that advisement of our largest enterprises cannot rest with vendors, research companies, or high-powered consulting companies. Instead, large organizations can only do well if they have great talent making great decisions in every part of the organization.

“Take everyone’s information, just not their advice.”

Companies do not need to follow the pack, especially when to do so means that at best one can only achieve mediocrity. As with anything, understand the problem, understand how we got there, understand where the big vendors want to go, but understand where you should go based upon your business direction and competitive landscape, and then go there.

“Anyone who thinks talk is cheap never argued with a traffic cop.”

It is good to debate many issues, but useless unless you have the right people in the room.