When you start a project to build a custom application for an enterprise customer, there are always universal requirements the customer doesn’t tell you about. These are things you have to do in order to implement the stated requirements, so I call them meta-requirements.
It’s helpful to keep a checklist of these and review them at the beginning of any new project, especially if the project is in an unfamiliar IT environment. Here’s my checklist of ten.
Figure out which enterprise resources you are going to need, and make access arrangements (usernames, passwords, access rights, etc.). Some of these will be mandated by the customer (e.g. “you’ll have to use our source control system”) while others may be your own idea (e.g. “I’ll need a server to run CruiseControl”). Plan your development, testing, and deployment practices and your runtime architecture, then make provisioning and access arrangements as required for server hosts, databases, message buses, application servers, and any accounts and passwords required for bug tracking, change management, remote access, etc. These are essentially bureaucratic tasks that may take weeks at some sites, so it’s important to start early.
Arrange software licenses as required. Here you’re typically dealing with software vendors, so this may not take as long as provisioning, although in many cases the customer will have enterprise software licenses, which means even more bureaucracy. You may need to license user interface widgets and grids, reporting tools, development tools, performance measurement and monitoring tools, etc. Be especially careful of tools like Crystal Reports that require runtime licenses – you’ll need to arrange licenses for your end users as well.
What other enterprise systems will your application need to integrate with? Is there a clear specification for how to integrate to those systems? Do their developers have well defined release schedules and rigorous testing processes? What happens if you integrate to a system but its developers release a new, incompatible version one week before your own deliverable is due? It’s a good idea to write a simple example for each system you need to integrate with (in the spirit of “hello, world”), to make sure your software can communicate with it and access it reliably.
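One way to automate those “hello, world” checks is a small connectivity probe you can run against every system you depend on. The sketch below is illustrative only – the system names, hosts, and ports are placeholders, and a real probe should exercise the actual integration protocol, not just open a TCP connection:

```python
import socket

# Hypothetical registry of the systems this application integrates with.
# The names, hostnames, and ports here are placeholders for illustration.
SYSTEMS = {
    "order-management": ("orders.example.internal", 8080),
    "billing": ("billing.example.internal", 9090),
}

def smoke_test(host, port, timeout=5.0):
    """Return True if we can open a TCP connection to the system."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

def run_smoke_tests(systems):
    """Probe every registered system; returns {name: reachable?}."""
    return {name: smoke_test(host, port)
            for name, (host, port) in systems.items()}
```

Run something like this nightly and you find out about an incompatible or unavailable upstream system before your users do.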
How will data flow within your application? If it has to go to remote systems, will the data be staged in database tables, message queues, or just kept in memory? How will the application handle network outages? Will the application require asynchronous communication between components? What tools and protocols will be used for that? How will versions be managed? (If your system consists of multiple communicating components, you will either have to make sure they are always upgraded simultaneously, or think through versioning carefully.) How will the data schema be managed? (If your components have to exchange complex data structures, how will you keep everything in sync over time as the datatypes evolve?) What is the primary means of sending requests to your system anyway? SOAP? REST? Asynchronous messages? These are basic questions for any service-oriented architecture, and should be considered early in the design.
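One common answer to the versioning questions is to tag every message with an explicit schema version, so a receiver can dispatch on it and upgrade old payloads rather than requiring all components to be deployed in lockstep. A minimal sketch (the field names, version numbers, and the `cust` → `customer_id` rename are invented for illustration):

```python
import json

# Sketch of explicit message versioning between components.
# Version numbers and field names are illustrative assumptions.
CURRENT_VERSION = 2

def make_envelope(payload, version=CURRENT_VERSION):
    """Wrap a payload with a schema version so receivers can dispatch on it."""
    return json.dumps({"version": version, "payload": payload})

def handle_message(raw):
    """Accept both current and older schema versions."""
    msg = json.loads(raw)
    version = msg.get("version", 1)  # messages predating versioning count as v1
    payload = msg["payload"]
    if version == 2:
        return payload  # current schema, use as-is
    if version == 1:
        # upgrade an old-format payload in place, e.g. a renamed field
        payload = dict(payload)
        payload.setdefault("customer_id", payload.pop("cust", None))
        return payload
    raise ValueError(f"unsupported schema version {version}")
```

The point of the sketch is the shape, not the details: every component knows which versions it can read, and rejects loudly anything newer than itself instead of silently misinterpreting it.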
If something goes wrong, how will you debug the system? Is there enough logging to show which component failed? Are your components instrumented so that you can peek at caches, queues, and other internal data structures? How can you measure performance? Can you trace the processing of a particular transaction all the way through the system? Can you easily measure average and maximum latency, throughput, and other performance metrics? These considerations can influence a design considerably. Often a lower-performing, database-oriented design that’s transparent is preferable to a higher-performing design that can’t be debugged when it fails.
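The latency questions in particular cost very little to answer if you instrument from the start. As a sketch of the idea (in a real system you would more likely reach for a metrics library than roll your own), a decorator can record per-operation timings and report count, average, and maximum:

```python
import time
from collections import defaultdict

# Minimal per-operation latency instrumentation; a real deployment
# would likely use a metrics/monitoring library instead.
_latencies = defaultdict(list)

def timed(name):
    """Decorator that records the wall-clock latency of each call under `name`."""
    def decorate(fn):
        def wrapper(*args, **kwargs):
            start = time.perf_counter()
            try:
                return fn(*args, **kwargs)
            finally:
                _latencies[name].append(time.perf_counter() - start)
        return wrapper
    return decorate

def latency_report():
    """Summarize recorded latencies as {name: {count, avg, max}} in seconds."""
    return {name: {"count": len(xs),
                   "avg": sum(xs) / len(xs),
                   "max": max(xs)}
            for name, xs in _latencies.items()}
```

Even this much is enough to answer “which component got slow last Tuesday?” – a question that is nearly impossible to answer after the fact in an uninstrumented system.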
You will need to know which end user platforms your application has to support. If it’s a .NET application, do your users have the .NET 2.0 framework available? .NET 3.0? Does it have to run in unusual environments like Citrix? Mono? Make sure you have all the tools required to build, test and deploy on these platforms.
It’s difficult enough testing a standalone application. But most enterprise applications communicate with other enterprise applications. How are you going to handle testing in this case? If your application requires a database, do you have both production and test databases for your application? Are there production and test versions of the systems that feed into yours? The combinations can build up quite quickly. For example, say a customer has both production and test order management systems, and your system needs to connect to them to fetch and process orders. Perhaps your application should include a runtime switch to choose which system to connect to, since it’s perfectly reasonable to want to load production orders into a test version of your software.
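That runtime switch can be as simple as a configuration lookup. The sketch below assumes an environment variable named `ORDER_SOURCE` and two invented endpoint URLs; the important property is that an unknown value fails loudly instead of silently connecting to the wrong system:

```python
import os

# Hypothetical endpoints for the production and test order management
# systems; the URLs and the ORDER_SOURCE variable name are assumptions.
ENDPOINTS = {
    "production": "https://orders.example.com/api",
    "test": "https://orders-test.example.com/api",
}

def order_system_url(environment=None):
    """Pick the upstream endpoint at runtime, defaulting to the test system."""
    env = environment or os.environ.get("ORDER_SOURCE", "test")
    try:
        return ENDPOINTS[env]
    except KeyError:
        raise ValueError(
            f"unknown order source {env!r}; expected one of {sorted(ENDPOINTS)}")
```

Defaulting to the test system is a deliberate choice here: you want connecting to production to require an explicit decision, not to be what happens when someone forgets a setting.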
How often will you release new versions? Will you use software to support your project planning and execution? What are your source control rules and naming conventions? Will you have private branches per developer? Will you create separate branches for each new feature? What will your release and label naming conventions be? Will you use Continuous Integration? If so, which tools will you use? You should decide on these at the beginning of the project.
Every enterprise has its own process for deploying applications. You’ll need to define processes for test and production deployments, following local conventions if possible. A typical deployment process might be to select a build to deploy, have a script check the compiled artifacts into a source repository and assign them a tag or label, then raise a change request to have a system administrator run a script that checks out the new software from the repository and deploys it to the production server. This creates a paper trail of all production changes and makes it possible to redeploy old versions. Deploying client software is more complicated – you may need to look at ClickOnce or alternatives.
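The paper trail depends on a consistent tag naming convention, which is worth scripting rather than leaving to memory. A trivial sketch (the `REL-` convention and date format here are invented, not a site standard):

```python
from datetime import date

# Hypothetical tag-naming helper for the deployment paper trail;
# the "REL-app-version-date" convention is an illustrative assumption.
def release_tag(app, version, when=None):
    """Build a repository tag for a production release, e.g. REL-orders-1.4-20080115."""
    when = when or date.today()
    return f"REL-{app}-{version}-{when:%Y%m%d}"
```

Whatever convention you pick, generating tags from one function means the deploy script and the rollback script can never disagree about what a release is called.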
When you’re done implementing your system, how do you know it’s correct? I don’t mean writing unit tests (although that’s important) but rather – how do you convince the end users that you met their requirements? If you are rewriting a legacy system for example, and your new version has fewer bugs than the original, it will produce different output. In this case, how will you prove to the users that the new system is “more correct” than the original? This problem should be thought through during system design. It may be necessary for the new system to emit special logging information to justify its decisions. Or you can plan to run the old and new systems in parallel, while creating a third application to compare the outputs and send a daily report of differences to selected stakeholders who will be responsible for determining whether each difference is a bug or a feature. This is more of a psychology problem than a development problem.
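The comparison application in that parallel-run plan is mostly a keyed diff. As a sketch (the `order_id` key and field names are illustrative, and a real report would also classify and format the differences for the stakeholders):

```python
# Sketch of the parallel-run comparison described above: diff the outputs
# of the legacy and replacement systems, keyed by a record id.
def compare_outputs(old_records, new_records, key="order_id"):
    """Return (only_in_old, only_in_new, changed) for a daily difference report."""
    old = {r[key]: r for r in old_records}
    new = {r[key]: r for r in new_records}
    only_old = sorted(set(old) - set(new))
    only_new = sorted(set(new) - set(old))
    changed = {k: (old[k], new[k])
               for k in set(old) & set(new) if old[k] != new[k]}
    return only_old, only_new, changed
```

Each difference in the report then goes to a human who rules it a bug or a feature – which is exactly the psychology problem: the code only surfaces the differences, it can’t adjudicate them.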