A Beginner's Tutorial to Data Telecommunications Concepts
Lesson 103



In Telecommunications 102, we got a taste of how a connection could be made to a terminal that was not directly connected to the mainframe. We learned the difference between digital and analog, and, as a result, the difference between a modem and a DSU. In that lesson, the modem/DSU was referred to as the heart of telecommunications. It is now time to dive into the 'nuts and bolts' of a telecommunications network. The components you will learn about are the devices that make modern data systems possible. Not all of the devices discussed are in use in every system. Indeed, many of them have fallen from favor and been replaced by more advanced equipment, but they are presented as stepping stones to help you understand how these systems evolved.

This lesson will continue to focus on expanding a data network, both within the data center that houses the processor and to other offices. It was necessary to isolate the discussion of the modem/DSU in its own lesson, because the modem is the key element of long-distance telecommunications. A good understanding of the modem/DSU is essential to this and most subsequent lessons.

"Why Is The System So Slow?"

As more terminals began to move to the desktop and remote offices, system engineers began to see the CPU's efficiency drop. This was because the CPU had to spend a percentage of its time managing communication for each of its terminals. The greater the number of terminals, the less time the CPU had to do its computations. The time and processor power needed to manage terminal sessions is called communications overhead.
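To put a rough number on the idea, here is a small Python sketch of how per-terminal overhead eats into compute time. The two-percent-per-terminal figure is purely an assumption for illustration; real numbers varied widely from system to system.

    # Toy model of communications overhead (illustrative only).
    # The 2% cost per terminal is an assumed figure, not a measurement.

    def useful_cpu_fraction(terminals, overhead_per_terminal=0.02):
        """Fraction of CPU time left for computation after servicing terminals."""
        return max(0.0, 1.0 - terminals * overhead_per_terminal)

    for n in (1, 10, 20, 30, 40):
        print(f"{n:2d} terminals -> {useful_cpu_fraction(n):.0%} of the CPU left for real work")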

As the issue of communications overhead was studied, designers realized that CRT traffic was 'chatty,' whereas printer and tape traffic was 'bursty.' That is to say, a user sitting at a CRT may issue a command of a few letters that yields an output of only a few lines. A given session might involve dozens of these input/output exchanges, but none would be more than a few characters each time.

After the user stepped away from the CRT, the terminal still imposed communications overhead on the CPU by requiring it to continually check the terminal for a logon request. A printer, on the other hand, would not impose the same overhead. The CPU would only allocate processing power to the printer if it had a burst of information to transmit. In the case of a tape drive, the CPU need only allocate communications overhead when it needed to store or retrieve data. For both printers and storage devices, the CPU looks at the device only when it knows it needs it.

What was needed was a device that could take over the communications overhead imposed by CRTs. This device is known as a front-end processor, or FEP. It is the front-end's job to handle the mundane aspects of a terminal session, thus allowing the CPU to spend more time computing. The front-end passes the input from the terminal to the CPU only after the user presses the Enter key.
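Here is a minimal Python sketch of that idea. The FrontEnd class and the deliver_to_host callback are hypothetical names invented for illustration; the point is only that the front-end absorbs every keystroke itself and hands the host one complete line when Enter arrives.

    # Sketch of a front-end processor buffering keystrokes (illustrative only).
    # 'deliver_to_host' stands in for the connection to the CPU.

    class FrontEnd:
        def __init__(self, deliver_to_host):
            self.deliver_to_host = deliver_to_host   # called only on Enter
            self.line_buffer = []                    # keystrokes held locally

        def keystroke(self, char):
            if char == "\n":                         # the user pressed Enter
                self.deliver_to_host("".join(self.line_buffer))
                self.line_buffer.clear()
            else:
                self.line_buffer.append(char)        # the CPU never sees this

    fep = FrontEnd(deliver_to_host=lambda line: print(f"host receives: {line!r}"))
    for ch in "LIST USERS\n":
        fep.keystroke(ch)

The CPU is interrupted once per command, instead of once per keystroke.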

The concept of the front-end processor led to a new generation of computers. Rather than a bank of ports to be used for connecting terminals, these new processors were equipped with a set of connections exclusively for tape drives and front-ends. These new ports have come to be known as channels. Because each system was outfitted with perhaps only a half dozen channels, the engineers designed them with a much higher data rate than a standard terminal port.

The introduction of the channel design provided a significant boost to system performance for two reasons. The first reason was that by providing a high-speed path for tape drives, the CPU did not have to wait as long for data to be retrieved. At the time of this advance, printers were seldom more than teletype equipment, and were not mechanically capable of printing at more than 60 characters per second. This meant that there was no benefit to providing the printer with a channel connection. With the exception of the master console discussed in the first lesson, all CRTs and printers were shifted to the FEP.

As was the plan, the second reason for better system performance was the decrease in communications overhead. Consider the case of the front-end processor developed by the Digital Equipment Corporation (DEC)* for their VAX computer systems. This device was called a DEC server, or DECSA, and could control thirty-two CRTs. (The DEC server should not be confused with the LAN servers discussed in the first lesson.) The CPU allocates only the communications overhead needed to run a single device, yet that device is running thirty-two terminals. With a front-end installed, a system that was previously stressed with thirty terminals should run as if it were only supporting one.

Data, Data, Everywhere

Consider the deployment of an early VAX system equipped with a DECSA. Three ports on the DECSA were connected to terminals in the data center, and three ports were connected to modems. These modems communicated over analog leased lines to modems in field offices, where they were connected to CRTs. Notice that two of the terminals were in the same office, yet each had its own leased line. In this example, the terminals in Chicago would be considered local terminals, and the terminals in St. Louis and Cleveland would be considered remote. At the time, it was said that if a piece of equipment had a modem/DSU between it and the mainframe, it was considered remote.
As the situation of having several co-located remote terminals became more common, business managers asked their telecom people if there wasn't a way to run two terminals off the same line. In the 1950s the answer to this question was negative, but as the mainframe industry is a market driven entity, communications vendors soon provided a solution. What was needed was a device that could accept two individual terminal ports at the host end (remember 'host' from lesson one) and encode them into a single data stream. At the user end of the leased line, a second companion device would then decode the signal and direct it to the appropriate terminal.
The device developed for this task is called a multiplexer, or MUX. Early multiplexers had a connection that interfaced with a modem, and at least two connections for terminals. Since the terminals at the remote location shared the same leased line, the recurring cost of the second line was saved. If the telco was charging $100 a month for the second line, it would be easy for a telecom analyst to justify the $1000 cost of a set of MUXs. After a year, the devices would have saved more money than they cost.
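The arithmetic behind that justification, sketched in Python with the lesson's example figures (real tariffs and hardware prices varied, of course):

    # Payback-period arithmetic for the MUX example above (example figures only).
    mux_pair_cost = 1000        # one-time cost of the pair of MUXs
    monthly_line_saving = 100   # leased line no longer needed

    payback_months = mux_pair_cost / monthly_line_saving
    first_year_net = 12 * monthly_line_saving - mux_pair_cost

    print(f"payback in {payback_months:.0f} months, net saving after one year: ${first_year_net}")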

If a data line supports a single device, as is the case with the line to Cleveland, the line is said to be dedicated. The line to St. Louis is not dedicated, because it is shared by two terminals.

The MUXs of today's data networks are much more advanced than their ancestors. The original MUX gave each of its terminals a slice of the total bandwidth. In lesson two, we said that bandwidth was the ceiling at which a line could operate. In the case of a MUX, this ceiling is set by the data rate of the modem/DSU. If the modem/DSU were operating at 2400bps, and the MUX was supporting two terminals, each terminal would get 1200bps.
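In other words, with this fixed allocation each terminal's share is simply the line rate divided by the number of terminals, whether or not the other terminals are in use. A tiny Python sketch of that division:

    # Fixed allocation: the line rate is divided evenly among the terminals,
    # whether or not every terminal is actually in use.

    def per_terminal_rate(line_rate_bps, terminals):
        return line_rate_bps // terminals

    print(per_terminal_rate(2400, 2))   # 1200 bps each, as in the example above
    print(per_terminal_rate(9600, 2))   # 4800 bps each (compare the SNA discussion later)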

When thinking about the multiplexer, do not confuse it with a front-end processor. Though there are some hybrids available, the traditional MUX does not relieve the CPU of any of its processing burden. A MUX takes multiple ports from the front-end and combines them into a single data stream for a partner MUX to break back out. What goes in one MUX comes out the other.

The MUX allowed for more cost effective expansion of a mainframe's data network, but it was not without its drawbacks. The first problem was the fact that (in the above example) the MUX allocated 1200bps for each terminal, whether the terminal was using it or not. This meant that if one user was at lunch and logged off, the second user could not take advantage of the full bandwidth: he was restricted to what the MUX allowed him. The second problem was that MUXs were always deployed in pairs: one at the host end, the other at the user end.

Eventually, multiplexer evolution solved the first problem (as we will learn in a future lesson), but the MUX could never overcome its second flaw. Consider a bank in a major city. If Community Bank had three branches, and they wanted to put three terminals in each branch, they would need three MUXs at the host end: one for each branch. This would be easy enough to handle. But what of First National Bank, supporting one hundred branches? The data center would have to have a room set aside for the one hundred multiplexers. As data processing managers looked to the future, they saw this as a problem.

Not To Be Outdone...

The International Business Machines (IBM)** Corporation offered an alternative system that was built around two different front-end processing devices. The first of the two devices was constructed to service only remote terminals, and was simply called a front-end processor. The second was called a cluster control unit, or CCU. Over the years, the cluster control unit has come to be known as a controller.

IBM felt the approach of adding another DECSA every time the network needed another thirty-two terminals would not work on a large scale. They proposed a method of distributing the communications overhead to the remote locations called Systems Network Architecture, or SNA. The SNA model involved connecting a front-end processor to a mainframe channel; the front-end would talk only to modems.
At the other end of the data line, the remote modem was connected to a controller. This remote controller could maintain thirty-two terminal connections.

In SNA, the front-end and controller work together to accomplish the same task as the multiplexers mentioned earlier. By design, however, the IBM FEP and controller solved the problem of having to have a separate MUX at the host site for each of the remote offices. A single IBM FEP, with a leased line or switched connection to each controller, could drive all of the remote terminals. In the situation of a company with one hundred remote offices, the result was a significant savings to the customer in terms of money, space, and maintenance.

To further improve their system, IBM designed the controller to behave differently than the traditional MUX. As stated earlier, a MUX would set aside a percentage of the bandwidth for each of its terminals. If a terminal was vacant, the other terminals could not take advantage of this unused bandwidth. The SNA controller solved this problem years before the multiplexer vendors did.

Research had shown IBM that the majority of CRT users were not interacting with the mainframe, but were staring at the terminal screen reading the system's output. If each of a dozen users received the same data at the same instant, some would require more time to process the information than others. This meant that all twelve of them would not be responding at the same time. Banking on the thought that some users are slower than others, SNA uses a first-in, first-out, or FIFO, approach.

This means that if only one person is in the office, they get the entire bandwidth. As the most data a user at a dumb terminal could send is about 2000 characters, they would only need a 2400bps line for about eight seconds. In an office with thirty-two terminals, it is not likely that a second person would want to transmit during the same eight seconds. If a second user did attempt to transmit one second after the first person pressed Enter, his data would be buffered for seven seconds before being transmitted.
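The 'eight seconds' follows from assuming roughly ten bits on the wire per character (a start bit, eight data bits, and a stop bit, the usual asynchronous framing). A quick Python sketch of that arithmetic, including the second user's wait:

    # Rough arithmetic behind the FIFO example above.
    # The 10 bits-per-character figure assumes asynchronous framing
    # (start bit + 8 data bits + stop bit).

    def transmit_seconds(characters, line_bps, bits_per_char=10):
        return characters * bits_per_char / line_bps

    first_user = transmit_seconds(2000, 2400)       # about 8.3 seconds on the line
    print(f"first user's screen of data: {first_user:.1f} s")

    # A second user who presses Enter one second later is buffered
    # until the line frees up.
    wait = max(0.0, first_user - 1.0)
    print(f"second user buffered for about {wait:.1f} s")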

Buffering is the temporary storage of data to compensate for the time needed by hardware to process the information.

The process of buffering is similar to that of a stop sign. During normal driving conditions, a car would wait at a stop sign only until the flow of traffic is clear. During 'rush hour,' it is likely that a second car might get stuck behind the first. At two AM, when there is virtually no traffic, the car need only pause long enough to verify that it is safe to proceed. Multiplexing is more like a traffic light with a fixed timing cycle. If the light is programmed to be red for two minutes, drivers must wait two minutes: even at two AM.

Eventually, FIFO came back to bite IBM. This was the result of customers using PCs attached to controllers to upload data files to the mainframe. The PC would get the full bandwidth of the line for however long it took to transfer the data file, so no terminal could use the line until the upload was finished. To prevent this from impacting other users, the controller was programmed to be able to packetise the data.

Packetising means the data is cut into smaller chunks, and sent one chunk at a time.

The receiver of the chunks (packets) reassembles them into their original format. Each packet is numbered so the receiver will know if any packets were lost during transmission. Think about packetising like a jigsaw puzzle. Since each piece is unique, the puzzle could be shipped unassembled, and the receiver could reconstruct it as it is supposed to be. If any pieces are missing, the gaps in the picture will be obvious.
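Here is a minimal Python sketch of the idea: number each chunk so the receiver can reassemble the data and spot a gap. It assumes nothing about IBM's actual frame format; the function names are invented for illustration.

    # Illustrative packetising: number each chunk so the receiver can
    # reassemble the data and detect any missing pieces.

    def packetise(data, chunk_size):
        return [(seq, data[i:i + chunk_size])
                for seq, i in enumerate(range(0, len(data), chunk_size))]

    def reassemble(packets):
        packets = sorted(packets)                    # order by sequence number
        received = {seq for seq, _ in packets}
        highest = packets[-1][0]
        missing = [seq for seq in range(highest + 1) if seq not in received]
        return "".join(chunk for _, chunk in packets), missing

    packets = packetise("PAYROLL FILE CONTENTS GO HERE", chunk_size=8)
    data, missing = reassemble(packets[:2] + packets[3:])   # packet 2 lost in transit
    print(missing)   # [2] -- the gap is as obvious as a missing puzzle piece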

IBM's plan had solved both of the problems common to multiplexers, but it was obvious to the designers that one important aspect was missing. A close examination of the diagram above will show that there is no provision for local terminals. All terminals had to be connected to a controller, and controllers had to be connected to the front-end processor via a pair of modems. This meant that terminals that were local to the CPU would have to be (by definition) remote.

The solution to this was provided through yet a third component, a device called a local controller. Just as the definition of a remote terminal is one that has a modem/DSU between it and the mainframe, a remote controller has a modem/DSU between it and the mainframe. Since the local controller is not serviced by a modem/DSU, it cannot be remote; thus it must be local.

If you found that confusing, you'll love this:

The terminal interfaces on the local and remote controllers are identical. This means a terminal that is installed on a remote controller can also be installed on a local controller. In other words: you cannot look at a terminal and tell whether it is locally or remotely attached.

The Obvious Next Question

Having presented the two major architectures that have survived four decades of mainframe evolution, I would hope you are now asking yourself which system is the best. As Hamlet put it: "Ay, there's the rub." For, you see, the answer is unequivocal-- it all depends.

On what, you ask? Principally, the number of remote locations your mainframe must feed.

The bottom line is that the IBM system requires the purchase of an FEP that will cost in excess of a quarter million dollars. This means that to justify that expense, the FEP and remote controllers would have to feed upwards of one hundred sites. This is not a hard-and-fast rule. Some companies with fewer sites may choose the IBM solution for other reasons. You can bet, however, that if a company has fewer than twenty locations, they are using multiplexers and a device similar to the DECSA.

This is a good time to interject another point: The DECSA was perhaps one of the most widely distributed small-system front-ends, making it worthy of its special mention in this lesson. At the close of the twentieth century, there were few DECSAs that had not been replaced by newer products such as the DEC Mini-Server and DEC Hub 90 Series. Furthermore, IBM has produced a piece of equipment for its mini-computer line that performs the same task as the DECSA. Since DECSA is a trademark of the Digital Equipment Corporation, IBM's counterpart is called a concentrator. The technical name for both the DEC server and the concentrator is actually asynchronous terminal controller.

As long as we are comparing the two systems, there is a vitally important detail that must be clarified. It was mentioned earlier that the local controller does the same job as the DECSA. This is correct in the respect that they are both front-end processors, relieving the CPU of communications overhead. In an earlier diagram, we saw a DECSA servicing both terminals and modem/DSUs (which extended terminals). The SNA controller is not designed to be connected to a modem/DSU on its terminal ports; as such (without exotic or obscure interconnections), you will not see a remote terminal behind a local controller.

There's No Such Thing As A Remote Channel

If you think about it for a moment, a remote channel would have to be a channel that has a modem/DSU between it and the mainframe. Since a channel is a connection between the mainframe and its front-end processor, which extends terminals, all channels are local. Right?

Of course that's right, and don't forget it-- All equipment that is using a channel interface is locally attached. Hence, all terminals connected to equipment that is using a channel interface are local terminals.

Now that we have gotten that straight, consider the situation of a telecom analyst who is caught up in a corporate merger. (Pay attention: this could be you.) The Board of Directors has just announced that the company has acquired a business seven hundred miles away. They want to have fifty terminals installed in the remote office, and they want to know how you are going to accomplish this task.

If your company had already purchased an IBM FEP, you could drop a couple of remote controllers in the office and be done with the project. Unfortunately, the size of your existing network never justified the cost of an IBM FEP. This means you are either using a DECSA type front-end, or are running all your equipment with local controllers.

Let us first consider extending fifty terminals to a remote office from a DEC system. This would require the addition of two more DECSAs at the home office, and at least two large multiplexers at both ends. Furthermore, the multiplexers would have to be connected to a line with sufficient bandwidth to sustain the fifty terminals.

Should your system be an IBM computer using only local controllers, the story is different. Per that vitally important detail mentioned in the previous section, you cannot extend terminals from a local controller. This means (if you haven't figured it out yet)-- you're screwed.

But wait! Would IBM leave you without a solution?

As it turns out, there is another device that will save the day. Having faced this crisis before (without a solution), DEC and IBM both have devices that can connect to a channel at the host site and extend the interface to a remote site. The device that allows equipment to use a channel interface at a remote location is called a channel extender.
The channel extender buffers the high speed data from the CPU and converts it to a standard modem interface, which puts the signal onto a data circuit. At the remote location, the companion channel extender reverses the process.

It is important to realize that a device connected to a channel extender is a local device, as it could just as well be installed at the host site. Having a channel extender installed in a network can further confuse the issue of local and remote terminals. As we said earlier: all channels are local!

Could SNA Get Any More Confusing?

We're almost done... Don't get discouraged.

Suppose for a moment that your company did have a front-end processor attached to its CPU. As was stated previously, the fifty terminals could be driven off a couple of SNA controllers. We say 'a couple,' as the industry standard for SNA controller design is a unit that drives thirty-two terminals. Since we need fifty terminals, we are faced with another slight problem.

Lucky for you, one of your ancestral peers faced this problem in bygone days. The solution that should immediately come to mind would be to install a multiplexer on the line to drive the two controllers. This would work, but it would defeat the advantages of the FIFO nature of SNA traffic. In other words: if your connection to the remote office is 9600bps, and you install a multiplexer, each controller would get 4800bps. As such, each terminal's maximum bandwidth would be half the line's rate.

It was apparent that the problem needed a solution that would maximize the advantages of the SNA architecture. The solution came in the form of a device called a port sharing device, or PSD. This device installs between the modem and the controllers. It differs from the traditional multiplexer by allowing whichever controller asks to transmit first to have the complete bandwidth of the data line. A traditional multiplexer offers one bit to the first controller, the next bit to the next controller.
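To make the contrast concrete, here is a rough Python sketch with made-up numbers. It simply compares how long one screenful takes when the requesting controller gets the whole 9600bps line versus being stuck with a fixed half share.

    # Illustrative contrast: a port sharing device lends the whole line to
    # whichever controller asks first; a fixed MUX limits each to its share.

    LINE_BPS = 9600
    BURST_BITS = 2000 * 10     # one screenful, ~10 bits per character on the wire

    def psd_seconds(bits, line_bps=LINE_BPS):
        return bits / line_bps                  # full line for the first requester

    def fixed_mux_seconds(bits, controllers=2, line_bps=LINE_BPS):
        return bits / (line_bps / controllers)  # stuck at a fixed fraction

    print(f"PSD: {psd_seconds(BURST_BITS):.1f} s, fixed MUX: {fixed_mux_seconds(BURST_BITS):.1f} s")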

For a port sharing device to work properly, the controllers must be configured to packetise their data. If they are not packetising, one controller might monopolize the line, causing the second to crash. When this does happen, the first controller is said to be streaming. (We'll cover this in more detail in a later lesson.)

A final note on the port sharing device: unlike most of the ingredients of the telecommunications pie, this device does not have a standardized name. The first port sharing devices were referred to as modem sharing devices, or MSDs. In today's mostly digital world, a person using the term modem sharing device is 'dating themselves' as an old timer. After all, a true MSD can only be installed on an analog line. (Refer to Lesson 102.)

Another term you may hear is multiple access unit, or MAU. Some vendors chose to call their devices MAUs because they provided multiple units access to the same modem/DSU. Unfortunately, when personal computer local area network technology spilled forth unto our planet, IBM chose to name a principal piece of their LAN hardware the multistation access unit, or MSAU. Third-party vendors wishing to avoid lawsuits called their MSAU clones MAUs. This confused the telecommunications industry to the point that the term MAU fell into disuse. Again, someone that uses the term MAU is dating themselves as 'pre-LAN.' (The MSAU will be covered in a future lesson.)

The bottom line: your best bet is to stick with port sharing device, but don't correct your boss for calling it something else.

It's Amazing It Works At All

The telecommunications industry is the ultimate example of Darwinian theory. As a market driven industry, only those products that satisfy the techies and the pencil pushers have survived the test of time. This lesson has only discussed the two major systems that have evolved and adapted. Several lesser systems are now extinct. By focusing our studies on DEC's asynchronous terminal controllers and IBM's Systems Network Architecture, we are continuing to expand our knowledge base. Mastering the structure of these two systems is vital to the successful telecommunications professional.


* DEC, VAX, and DECSA are registered trademarks of Digital Equipment Corporation.
** IBM is a registered trademark of the International Business Machines Corporation.
