
View System Documentation - Design Decisions

This page lists the main high-level design goals and the main functional requirements for the IBIS-PH Public View System. Following the lists, a brief history is outlined, followed by a description of the current system. The balance of the page provides insight into the decisions that determined the current shape of the system.

High Level Goals

  • Usable by 90+ percent of all web browsers without any special software
  • Rich user experience
  • Simple Architecture/Easy to Maintain
  • Data driven
  • Ability to be adopted by other agencies
  • Open system with components that can be used by other systems

High Level Functions/Requirements

  • Display non-dynamic HTML pages
  • Provide user-friendly and interactive web pages
  • Provide consistent high level and context sensitive web page navigation
  • Gather user survey information
  • Display indicator profile contextual data
  • Display indicator profile values in a chart
  • Build/define IBIS-Q query
  • Display IBIS-Q results
  • Display IBIS-Q result values in a chart
  • Display IBIS-Q result rankings as a choropleth map
  • Mechanism to publish data from the IBIS-PH Data Maintenance/Admin System
  • Provide ability to preview IBIS-PH Data Maintenance/Admin System data
  • Able to interact with other non-IBIS-PH systems
  • Able to provide and consume XML data
  • Deployable to different server environments
  • Distributable
  • No or low software licensing costs
  • High quality interactive charts with SVG and JPEG options
  • Charts able to handle missing values, confidence limits, and other miscellaneous items
  • Pages need to have a printer friendly output format option
  • Needs to be secure (authorized and authenticated) for certain requests
  • Needs to provide a mechanism to externalize paths, control flags, etc. into a text file that can be configured for each deployment environment
  • Need to be able to specify different/separate data content directories for site/deployment specific XML documents
  • Need to be able to "hot patch" files without having to redeploy app and without a system administrator being involved
  • Needs to be web searchable/indexable/crawlable
  • Needs to use pluggable, standard frameworks

History (Major Milestones)

  • 1994-1999 - Health Data Query System (ACTION2000, HI-IQ, MatCHIIM)
  • 2000 - Utah is awarded $54,000 from HRSA on the Data Utilization and Enhancement (DUE) grant. A system scope/requirements document is written that describes the desired web based data dissemination system.
  • Jan 2001 - STG is contacted to look at the system and give an estimate of the work effort. STG responded that creating the entire system would best involve a data warehouse solution and would be a 2-5 man-year effort. A less ambitious proposal was also given, which involved storing Indicator Profile data in a Utah State-owned Oracle relational database and using a Utah State-owned Actuate reporting tool to provide the end-user indicator profile reports to the public. A web-based data maintenance interface was also offered as an option.
  • Mar 2001 - Dr. Lois Haggard acquired the additional funds needed to implement the less ambitious system, and STG was awarded a Time and Materials contract.
  • Summer 2001 - Dr. Lois Haggard writes a grant proposal to the CDC based on the success of the MatCHIIM/HI-IQ and the promising direction of the current Indicator Profile system.
  • Nov 2001 - STG completed the original work on time and within budget.
  • Mar 2002 - Utah is awarded a 5 year CDC Assessment Initiative grant to expand and enhance the system.
  • May 2002 - STG is contracted to replace the Actuate reporting engine with a state-of-the-art Java web application that utilizes AgileBlox SVG charts.
  • Spring 2003 - New IBIS-PH View system is fully operational.
  • Winter 2003 - New IBIS-PH Admin system is updated, and numerous View system tweaks are made.
  • Fall 2004 - View system updated to interface with HI-IQ, which was updated to return XML data instead of an HTML page.
  • Winter 2005 - Query maps implemented.
  • Spring 2005 - Arizona Adopts the IBIS-PH View system.
  • Summer 2005 - Converted the IBIS-PH View system to the standard Java Spring framework. This was done to help the system become more pluggable and able to be released as an open source system, with the hope that other states/agencies would be able to adopt and extend it. The thought was that developing it in a standardized framework would enhance the adoption rate.
  • Spring 2006 - IBIS-PH View system is documented.

Current System

The system is built on open Java web server software using the pluggable Spring Web MVC Framework. Servers that support Java are available from many vendors and run on many hardware platforms. XML is the industry standard for data interchange and is used as the data storage and data transfer mechanism. Since the applications use standard HTTP communication, the entire system is deployable to a wide variety of environments and servers. Some of these components may also be wrapped for use in a web services environment. The system is built to be data driven, with as little reliance on custom Java controller code as possible. This lessens the dependence on custom software but also requires that the data be put into a consistent structure. Since the data are stored in XML, and since one of the design goals was to have as little Java code as possible, XSLT was chosen to produce the web pages. Listed below are some discussions of the reasoning and the pros/cons of these decisions.

Why Java

When this project was first being developed, Microsoft's .NET did not yet exist as a commercial product, and it runs only on Microsoft operating systems. Since many state agencies (including Utah) run servers that are not MS-OS based, it was not an option. Other technologies were also considered, such as report writers, ColdFusion, and other web-based scripting languages. Each had compelling reasons for adoption, but each also had issues (cost, robustness limitations, etc.). None of these other solutions had a robust enough charting engine, so a custom Java solution or chart server was still needed. Since Java is very robust, runs on virtually everything, is free, and since the initial system was very simple, it was a reasonable solution.

Why XML/XSLT

The decision to go with XML/XSLT instead of Java Server Pages with a Relational Database Management System (JSP/RDBMS) was based on the stated goals of being expandable, able to interact with disparate systems, easy to maintain, and having as little custom Java code as possible. In addition, NEDSS also needed a presentation engine, and it was hoped that a solution could be developed that would work for both. In 2003 there was a lot of buzz about using XML as the standard for interchanging health data. New York City was also very interested in the system and wanted something that was more secure and did not require a live database connection for a public reporting system. Based on the above, it was determined that the IBIS-PH View system would store the data in XML files and use XSLT for the HTML page presentation. This approach worked well in that only a little Java code was needed. In fact, some time was spent researching Cocoon with the goal of having no custom Java code at all. However, the Java needs were very simple, and it was determined that maintaining a simple servlet was easier than implementing the entire Cocoon framework. Initially the system had only three servlets: 1) the XML/XSLT transformation servlet, 2) the Indicator Profile XML chart servlet engine, and 3) the System Servlet, which handled the Admin system's IP XML publish requests, cleared the transformation cache, forwarded user survey responses to the Admin system to be saved to the RDBMS, and provided miscellaneous system info. Very clean and simple.
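The heart of the transformation servlet described above is a standard JAXP XML-to-HTML transform. The sketch below shows that core operation in plain Java; the indicator document and stylesheet are invented for illustration and are not the actual IBIS-PH schema or code:

```java
import java.io.StringReader;
import java.io.StringWriter;
import javax.xml.transform.Transformer;
import javax.xml.transform.TransformerFactory;
import javax.xml.transform.stream.StreamResult;
import javax.xml.transform.stream.StreamSource;

public class TransformSketch {
    // Apply an XSLT stylesheet to an XML document and return the resulting markup.
    // This is a minimal stand-in for the View system's transformation servlet.
    static String transform(String xml, String xslt) throws Exception {
        Transformer t = TransformerFactory.newInstance()
                .newTransformer(new StreamSource(new StringReader(xslt)));
        StringWriter out = new StringWriter();
        t.transform(new StreamSource(new StringReader(xml)), new StreamResult(out));
        return out.toString();
    }

    public static void main(String[] args) throws Exception {
        // Hypothetical indicator document and stylesheet for demonstration only.
        String xml = "<indicator><title>Obesity Rate</title></indicator>";
        String xslt =
            "<xsl:stylesheet version='1.0'"
            + " xmlns:xsl='http://www.w3.org/1999/XSL/Transform'>"
            + "<xsl:output method='html' omit-xml-declaration='yes'/>"
            + "<xsl:template match='/indicator'>"
            + "<h1><xsl:value-of select='title'/></h1>"
            + "</xsl:template>"
            + "</xsl:stylesheet>";
        System.out.println(transform(xml, xslt));
    }
}
```

In the real system the XML and XSLT arrive from disk rather than string literals, but the data-driven idea is the same: the servlet is generic, and the pages are defined entirely by the documents and stylesheets it is given.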

XML/XSLT pros/cons VS. JSP with Java Objects

XML/XSLT requires almost no Java code, but the XSLTs can be very complex compared with JSPs. The JSP/POJO/RDBMS solution can be much more robust in that there are much better options for grouping and retrieving data, as well as much better control of logic and flow using Java versus XSLT. JSPs are also much easier to create and maintain, with HTML/JSP talent far more prevalent than XSLT knowledge. Performance-wise, both are acceptable, and both can cache the output page to an artifact file. For openness, the XSLT/XML solution is much easier to extend and deploy, since the system consists of XML text data files and XSLT text template files, compared with the cumbersome Java-object-to-RDBMS mapping, Java objects, and JSPs otherwise needed. XML versus an SQL RDBMS is a tough call, since both have advantages. XML can be made totally self-contained with hierarchical data structures. RDBMS data can be sliced and diced much more easily, without requiring any conversion. XML is more easily published since it can be totally self-contained, whereas copying separate records from different database tables can lead to data inconsistencies, as some records might need to retain an older referenced-table value that has since been updated by a newly published indicator.
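As an illustration of the "self-contained" point above, a hierarchical indicator document can carry its values, confidence limits, and source description inline, so publishing the one file publishes everything the page needs. The element names below are invented for illustration and are not the actual IBIS-PH schema:

```xml
<INDICATOR name="ObesityAdult">
  <TITLE>Adult Obesity Prevalence</TITLE>
  <!-- Copied inline at publish time rather than joined live from a lookup table,
       so a later edit to the source description cannot silently change this page. -->
  <DATA_SOURCE>Behavioral Risk Factor Surveillance System</DATA_SOURCE>
  <MEASURES>
    <MEASURE year="2004" value="19.2" lower_ci="18.1" upper_ci="20.3"/>
    <MEASURE year="2005" value="20.1" lower_ci="19.0" upper_ci="21.2"/>
  </MEASURES>
</INDICATOR>
```

The equivalent relational design would spread this across indicator, measure, and data-source tables, which is easier to query but harder to publish as one consistent snapshot.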

Why not a Data Warehouse

In hindsight, had it been known that Utah would end up with the money it did, a data warehouse with a good web reporting tool would have been pushed much harder. However, there was not enough funding even to purchase the tools and do an initial analysis, let alone any real work. The first simple parts of the system were needed to demonstrate that a system could be built, which in turn enabled Utah to get the additional funding. Also, the current solution runs on virtually any server without any special hardware requirements or commercial software (it runs fine on a simple 1 GHz PC with 512 MB of RAM and a 40 GB drive running Linux and Tomcat), and very little system administration is needed. The downside of the current solution is that its custom code requires expensive Java and XSLT expertise to extend the application, whereas a Commercial Off the Shelf (COTS) data warehouse solution requires fairly expensive licenses and a yearly maintenance contract, with limited customization but good support and great new capabilities and features (as long as the vendor supports and extends the product).

Initial Simple Specific System VS. Current Complex Loosely Coupled System

The original design goal of minimal Java code was achievable because the system only needed a few basic functions (XML/XSLT transformations and IP charts). Since then, the interactive query system was added, with charts, maps, and drill-downs. Artifacting and GZIP compression were added to help improve system performance, and it became desirable to have the system look like a static site instead of a dynamic site so that web crawlers could index it. As the system grew to adopt these features, the design of a few simple, specific servlets was outgrown. As the servlets and controllers grew in complexity, the system was also becoming less pluggable and harder for a Java developer to maintain and extend. The Spring Web MVC Framework was adopted to help with this. Since the code was getting complex, it was determined to at least keep the system standard and as loosely coupled as possible. As of 2005, the Spring framework provides the best state-of-the-art, supported system to accomplish the stated goals.
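The loose coupling Spring provides comes largely from declaring the URL-to-controller wiring in configuration rather than code. A hedged sketch of what a 2005-era Spring Web MVC handler mapping looks like follows; the bean names and URL paths are invented for illustration, not the actual View System configuration:

```xml
<!-- Fragment of a Spring dispatcher servlet configuration (illustrative only). -->
<bean id="handlerMapping"
      class="org.springframework.web.servlet.handler.SimpleUrlHandlerMapping">
  <property name="mappings">
    <props>
      <!-- Hypothetical mappings: swap or add controllers without touching Java code. -->
      <prop key="/view/**">xsltTransformationController</prop>
      <prop key="/query/**">queryController</prop>
    </props>
  </property>
</bean>
```

Because the mapping lives in a text file, an adopting agency can repoint paths at its own controllers without recompiling, which is the pluggability the paragraph above is after.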

Depending on the needs, the old "simple" way of specifying the XSLT and the XML file as part of the request URL still exists. If the query front end is desired, then those Java controllers and views are still needed. Also, the Indicator Profile View type XSLTs would need to be rolled back to the older versions that implemented DHTML for the chart/map HTML graphic image elements. Doing this cuts out a great deal of the system configuration files, but it also increases navigation complexity, makes it so that artifacting cannot be used, and makes it more difficult for search engines to crawl the site.

Spring VS. Struts

Spring was chosen because it is a much more pluggable framework and because Struts is really geared toward JSP with Java model objects. The Spring framework handles the XML/XSLT MVC pattern much more cleanly and deals with concerns like security (ACEGI) much better. The major plus for Struts is that many more Java developers are geared to that type of development. Other than that, Spring is much simpler, more pluggable, and not as limited when comparing the two frameworks as a web platform.

Dynamic VS. Static Content

Listed below are descriptions of the three basic options for how to create and serve up this system's content (which is 100% dynamic, i.e., this site does not have any static HTML pages):
  1. STATIC - The View system could be programmed to create all of the system's pages before they are requested. When a new indicator profile, query module, or "PAGE" XML document is published, all of the HTML files associated with that XML document could be created and saved to disk. That way only a simple web server would be needed to stream back HTML files. The downside is that some parts of the system must remain dynamic (query modules/results, the ability to override a chart graphic type or image type, etc.), not to mention the limitations this imposes on future features.
  2. DYNAMIC - This option keeps the system simple, as it creates the page's content each and every time it is requested. The pro of this approach is that it handles everything with much less complexity. Any time a new file is published, it is automatically handled. Changing the chart name or graphic type does not impact the system at all. The downside to being totally dynamic is that most content on the site is fairly static, as most pages only change once or twice a year. In addition, to make the pages crawlable by web search engines, the system was programmed to look and navigate like a static site. This static-site look enables pages to be easily saved to a disk file based on the path and filename and then simply streamed back the next time the same request is received. This helps cut down on the Java application server's memory and CPU loads, and it enables content to be delivered to the client faster.
  3. MIDDLE GROUND - This approach, as its name implies, is somewhere in the middle: the system retains its ability to dynamically handle all requests while implementing a mechanism to save created content to a disk file, which can then be used for similar future requests. This latter feature is called "artifacting" in the IBIS-PH View System. See the Artifacting page for a more detailed discussion of this feature: what it is, how to enable/disable it, and its issues.
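The artifacting mechanism in option 3 boils down to a check-then-build cache keyed by the request path: serve the saved file if it exists, otherwise run the expensive transformation and save the result. A minimal sketch in plain Java follows; the class and method names are invented here and are not the actual View System code:

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.function.Supplier;

public class ArtifactCache {
    private final Path cacheDir;

    public ArtifactCache(Path cacheDir) {
        this.cacheDir = cacheDir;
    }

    // Return the artifact for this request path if one exists on disk;
    // otherwise build the page, save it for future requests, and return it.
    public String get(String requestPath, Supplier<String> builder) throws IOException {
        Path artifact = cacheDir.resolve(requestPath);
        if (Files.exists(artifact)) {
            return Files.readString(artifact);   // cheap: stream the saved file back
        }
        String page = builder.get();             // expensive: e.g. XML/XSLT transform
        Files.createDirectories(artifact.getParent());
        Files.writeString(artifact, page);
        return page;
    }
}
```

Publishing a new XML document then only needs to delete the matching artifact files so the next request rebuilds them, which is the main operational issue the Artifacting page would cover.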

SVG VS. Macromedia's Flash

One of the main goals was to have scalable graphics so that a chart could be resized with minimal loss of detail. Bitmap image formats (JPEG, GIF, BMP, PNG, etc.) do not allow for this; only vector-based solutions do. It was also highly desirable to provide interaction with the chart and/or map, which only SVG and Flash allow. The problem with Flash is not the purchase price of its development tools but the difficulty of importing a Flash chart into an office application: the user cannot simply copy and paste it, nor embed the chart as an object. Flash has much better browser plugin support, and its tools are very good. SVG is an open standard and at the time had great momentum; most mapping products also have options to import and export SVG files. However, Adobe's purchase of Macromedia (maker of Flash) has drastically slowed SVG browser plugin development.

Content updated: Wed, 4 Nov 2015 09:26:28 MST