Better Testing for Better Business

It's a fair question, and one that many system builders ask themselves. But it misses an important point. A solid testing strategy can help you stand out from the competition, win new customers, strengthen your relationships with existing accounts, and save you both time and money.

Not convinced yet? Then consider these three important reasons for developing a strategy for testing your systems:

I hope you're convinced that testing is beneficial for both your customers and your business. So let's take a closer look at the standard phases of testing--and how each can help your business:

Component testing involves verifying a system's building blocks, such as add-on PCI, USB, or FireWire peripherals, then checking to ensure they meet your client's needs. I'm not suggesting that you exhaustively test every piece of add-on hardware. In many cases you will seek out hardware that's well tested by the manufacturer, and you'll want to leverage your experience with those trustworthy products. However, verifying manufacturers' claims, where they affect your client, can save you both time and money. That's especially true when those claims are new or subject to interpretation.

It's an unfortunate reality that what manufacturers advertise boldly on the box is not always what you get. As an example, we once bought a modem PCI card that advertised V.90 on the box and reported "connected - V.90" when in use. But a little testing showed it had negotiated down from V.90 on every connection without informing the user. It was not truly V.90 compatible for more than a few seconds!
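
Catching that sort of thing doesn't require fancy equipment. Here's a minimal sketch of a repeated connect-rate check in Python, assuming the pyserial package and a modem on a serial port. The port name, phone number, and CONNECT-string parsing are all placeholders--real modems vary in how they report their rate.

    # A sketch only: the port, number, and CONNECT parsing are placeholders,
    # and real modems differ (hanging up from data mode may also require
    # the "+++" escape sequence before ATH).
    import re
    import time

    import serial  # pip install pyserial

    PORT = "COM1"       # placeholder serial port
    NUMBER = "5551234"  # placeholder test number

    def dial_and_report(port, number, timeout=60):
        """Dial once and return the modem's reported connect rate."""
        with serial.Serial(port, 115200, timeout=timeout) as ser:
            ser.write(b"ATDT" + number.encode() + b"\r")
            response = ser.read_until(b"CONNECT").decode(errors="replace")
            response += ser.readline().decode(errors="replace")
            match = re.search(r"CONNECT\s+(\S+)", response)
            ser.write(b"ATH\r")  # hang up
            return match.group(1) if match else "no carrier"

    for attempt in range(5):
        print(f"Call {attempt + 1}: reported rate {dial_and_report(PORT, NUMBER)}")
        time.sleep(5)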

Interface Testing (also known as integration testing) happens when you look at how the system parts work together. After a system is assembled and you are satisfied the components are functioning, how well do the pieces operate with each other? For example: A voice/fax modem may have checked out well with the bundled software, but will it work with the client's software that is TAPI (telephony API) compliant? That's an interface you'll want to test. Develop a checklist of interfaces and data paths to verify, and document that critical drivers are loaded and working. Simply check them off, then show the checklist to your customer.
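
One way to make that checklist repeatable is to put it in a short script. Below is a minimal sketch in Python; every interface name and probe command here is a placeholder for the interfaces on the system you actually built.

    # A minimal interface-checklist runner. Every entry and command is a
    # placeholder -- substitute the interfaces and probes for your system.
    import subprocess

    CHECKLIST = [
        ("Drivers enumerate cleanly", ["driverquery"]),              # Windows
        ("NIC reaches the gateway",   ["ping", "-n", "1", "192.168.0.1"]),
    ]

    def run_checklist(items):
        for name, command in items:
            ok = subprocess.run(command, capture_output=True).returncode == 0
            print(f"[{'PASS' if ok else 'FAIL'}] {name}")

    run_checklist(CHECKLIST)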

System Testing looks at the system as a whole. It's an opportunity to verify that all hardware and software are working together, using a well-defined "suite" of tests. Unlike component and interface testing, which focus on pieces of the system, system testing treats the parts as a complete system. The goal: to confirm that what you built matches the system specifications.

Your system-test suite might also include broader tests that verify performance expectations, as well as load testing. (Having a testbed with other machines to emulate the customer's usage is handy, if not essential, for good system testing.)
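
If you want to formalize the suite, a unit-test framework organizes whole-system checks just as well as code checks. Here's a minimal sketch using Python's unittest module; the disk-capacity threshold and server address are placeholder specifications, not real ones.

    # A minimal system-test "suite" sketch using Python's unittest module.
    # The individual checks are placeholders for whatever your spec requires.
    import shutil
    import socket
    import unittest

    class SystemSuite(unittest.TestCase):
        def test_disk_capacity_meets_spec(self):
            total, used, free = shutil.disk_usage("/")
            self.assertGreater(total, 100 * 10**9)  # placeholder: >= 100 GB

        def test_network_reaches_server(self):
            # placeholder host/port for the customer's application server
            socket.create_connection(("192.168.0.10", 80), timeout=5).close()

    if __name__ == "__main__":
        unittest.main()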

Acceptance Testing proves that the system meets the customer's stated need. If the customer supplied written requirements for a system, then an acceptance test at the end of the process would verify that all expectations were met. Some customers might insist on doing this testing themselves or with you; but others might not seem to care.

What to do with these apathetic customers? Even if you haven't received their written requirements, it's still a good idea to document what you have built, then test to your documentation. Having written requirements makes good sense: You know what to build, and the customer knows what they are getting. Acceptance testing proves it.
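
Here's one way to "test to your documentation": tie each documented requirement to a pass/fail check, so the acceptance run produces exactly the report you hand the customer. A minimal sketch follows; the requirement IDs, wording, and checks are all placeholders.

    # A sketch of requirements-to-test traceability: one pass/fail check per
    # documented requirement. IDs, text, and checks are all placeholders.
    import shutil

    REQUIREMENTS = {
        "REQ-01": ("System has at least 100 GB of disk",
                   lambda: shutil.disk_usage("/").total >= 100 * 10**9),
        "REQ-02": ("Fax software is installed",
                   lambda: shutil.which("efax") is not None),  # placeholder app
    }

    print("Acceptance test report")
    for req_id, (text, check) in REQUIREMENTS.items():
        status = "PASS" if check() else "FAIL"
        print(f"  {req_id} {status}: {text}")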

Release or Field Testing takes testing a step further and puts your system in the Real World--or as close to it as possible. You might learn that your hardware won't hold up in a hostile environment or that your stress testing was not severe enough. Release testing is generally performed along with (or by) the actual users of the system. You might bring the system to its real-world environment for a field test.

These tests can be critical when you can't effectively simulate the PC's operating environment in the lab. This might mean connecting to larger systems, such as a PC acting as a special application server; introducing a shock-mounted system to a harsh industrial environment; or connecting a voice response system to a telephone switch. In each case, most labs could only approximate the actual operating environment.

It's not uncommon to do some field testing early in the testing cycle if many similar systems are to be built. You don't want to wait until you've built 25 systems to find out they all have the same problem!

Benchmark Testing

While benchmark testing is not part of the formal testing standards, benchmarking software for PCs has become very popular. Basically, benchmarking compares components against some well-established standard.

Benchmarking software is generally easy to use. It provides useful information about how base PC systems and subsystems -- video, memory, disk, etc. -- are functioning. It also compares a system's performance against other PC systems. Most benchmarking applications can run separate tests, repeat tests to "soak" or stress-test components, and print out system inventory and test results. This is good, built-in documentation, and you should take advantage of it.
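
The "soak" idea is easy to sketch yourself if your benchmarking package doesn't do it: repeat the same measurement many times and watch for drift between passes. Here's a minimal example in Python; the file size and pass count are arbitrary placeholders.

    # A sketch of a "soak" run: repeat the same disk benchmark and report
    # each pass, so a flaky or overheating component shows up as drift.
    import os
    import tempfile
    import time

    CHUNK = b"\0" * (4 * 1024 * 1024)  # 4 MB per write; sizes are arbitrary

    for run in range(10):              # 10 passes; run longer to really soak
        start = time.perf_counter()
        with tempfile.NamedTemporaryFile(delete=False) as f:
            for _ in range(16):        # 64 MB per pass
                f.write(CHUNK)
            f.flush()
            os.fsync(f.fileno())       # force data to disk before timing stops
            path = f.name
        elapsed = time.perf_counter() - start
        os.remove(path)
        print(f"Pass {run + 1}: 64 MB in {elapsed:.2f} s "
              f"({64 / elapsed:.1f} MB/s)")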

Benchmark tests typically cover areas such as processor, memory, disk, graphics, and video performance.

Some benchmarking software is available as freeware. Two good examples are VeriTest's Winstone and WinBench, which measure overall application performance and the graphics, disk, and video subsystems, respectively.

Commercial solutions include full-featured packages, such as CSA Research's Benchmark Studio, which qualifies PC hardware capacity, server scalability, and network performance. Other commercial packages are more complex (and costly), such as PC-Doctor, a suite of more than 350 test functions used in manufacturing settings. The creators of these packages also offer system-testing software to test PC subsystems. One example is load simulators, which can be used for stress-testing Web applications.
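
For perspective, a bare-bones load simulator is just concurrent requests against a Web application with a success count at the end. Here's a sketch using only Python's standard library; the URL, worker count, and request count are placeholders.

    # A minimal load-simulator sketch: concurrent GET requests against a
    # Web application, counting successes. URL and sizing are placeholders.
    from concurrent.futures import ThreadPoolExecutor
    from urllib.request import urlopen

    URL = "http://192.168.0.10/"  # placeholder: the system under test
    WORKERS, REQUESTS = 8, 100

    def hit(_):
        try:
            with urlopen(URL, timeout=10) as resp:
                return resp.status == 200
        except OSError:
            return False

    with ThreadPoolExecutor(max_workers=WORKERS) as pool:
        results = list(pool.map(hit, range(REQUESTS)))

    print(f"{sum(results)}/{REQUESTS} requests succeeded")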

Time Mismanagement?

By now you may be thinking, "That's an awful lot of testing. I won't have time to do anything else!" Fortunately, you don't need to conduct all these tests on every single system. Instead, thoughtfully pick and choose the ones that are best suited to the system you are building. How to make this choice? My recommendation is to base your decision on several factors, including how many identical systems you're building, how critical the system is to your customer's business, and the environment in which it will operate.

Carefully weighing these factors can help you design an appropriate test strategy. By "appropriate," I mean one that is time- and cost-effective for you, and beneficial for the client. Consider all the implications of how the system will be used before you develop a tailored test strategy.

For example, let's assume you're building 25 PCs that will run the same applications in similar operating environments. You might build one template system and test it rigorously in all phases. You could then verify each subsequently built system with a carefully chosen acceptance suite.
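
That verification step can be scripted, too. Here's a minimal sketch that runs the same acceptance script on every cloned machine over SSH and collects a pass/fail per box; the hostnames and remote script path are placeholders, and it assumes you have SSH access to each system.

    # A sketch of verifying cloned systems: run one acceptance script on
    # each machine and collect results. Hostnames and paths are placeholders.
    import subprocess

    HOSTS = [f"shop-pc-{n:02d}" for n in range(1, 26)]  # placeholder names

    for host in HOSTS:
        result = subprocess.run(
            ["ssh", host, "python3", "/opt/tests/acceptance.py"],
            capture_output=True,
        )
        print(f"{host}: {'PASS' if result.returncode == 0 else 'FAIL'}")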

Now let's assume you are building a single key file server that will support many users. In this case, you'd want a different strategy. With only a single critical server to test, you would want to first ascertain that the component pieces are working. Second, you would focus on system testing the performance of the hardware. Third, you would tune the software to meet customer expectations of speed and robustness.

Alternatively, you may want to build similar systems that will operate in different environments. In this case, you would do separate field tests for each system.

The overall point is to consider everything you know about the finished system and how it will be used, then use this information to tailor your test plan. No matter where you focus your test effort, you will want to test thoroughly. Follow your plan, but remain flexible. With a little thought, you can keep your test strategy intact while devoting time and energy to the areas where problems are most likely to be found.

I Found A Bug -- Now What?

When you find problems during testing -- and you will! -- just smile. Remember, finding trouble during testing should be considered a success. By effectively testing your product, you are preventing the customer from being disappointed, angry, or disadvantaged by hardware or software that doesn't perform up to their expectations. You are also saving yourself the time, aggravation, and embarrassment of having to repair one or more systems after they are in service. While spotting a problem may seem like bad news, in my experience customers would much rather hear that you uncovered a problem during testing than have to report that problem to you later.

What if you are called upon to provide an official test plan? In that case, you'll want to know about the IEEE 829 standard. As you may know, IEEE (pronounced "eye triple-E") stands for the Institute of Electrical and Electronics Engineers; it's an international, non-profit association that provides guidance on many technical topics, including testing. The IEEE has established a standard for formal test documentation that includes the critical areas of test planning, running tests, and reporting.

If you do any testing that needs to be formally documented -- say, for a large corporate client -- then IEEE 829, the IEEE Standard for Software Test Documentation, is the accepted model to follow. A good, high-level description of standard testing is available from Coley Consulting. You can also learn more about testing standards from the IEEE's own site.
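
To get started, you can stamp out a skeleton plan with the major IEEE 829 test-plan sections and fill it in per project. Here's a minimal sketch; the section list is abbreviated, so consult the standard itself for the complete set.

    # A sketch that writes a skeleton test plan with major IEEE 829
    # test-plan sections (abbreviated; see the standard for the full list).
    SECTIONS = [
        "Test Plan Identifier",
        "Introduction",
        "Test Items",
        "Features to Be Tested",
        "Features Not to Be Tested",
        "Approach",
        "Item Pass/Fail Criteria",
        "Test Deliverables",
        "Environmental Needs",
        "Responsibilities",
        "Schedule",
        "Risks and Contingencies",
        "Approvals",
    ]

    with open("test_plan.txt", "w") as f:
        for number, title in enumerate(SECTIONS, start=1):
            f.write(f"{number}. {title}\n\n    TODO\n\n")
    print("Wrote test_plan.txt")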

Now that you understand how to create and use a test strategy, here are my top testing tips:

ANDY MCDONOUGH is a professional musician, composer, voice actor, engineer, and educator happily freelancing in New Jersey. He consults regularly on testing hardware and software.
