



OVERVIEW OF BELLCORE METROCORE™ NETWORK


A. Albanese, M. W. Garrett, A. Ippoliti,
M. A. Karr, M. Maszczak, and D. Shia

Bell Communications Research
Morristown, New Jersey 07960, USA
(201) 829-4291


The Bellcore METROCORE™ network is a test bed prototype of a
metropolitan area network (MAN) for exploring new network concepts
and technology applications in broadband communications.
The present design and status of the METROCORE network
research prototype are outlined.
This system operates at 150 Mb/s with the potential for
upgrading to 2.4 Gb/s.
A basic hardware architecture has been designed which consists
of a network interface (the media access control (MAC) layer) and
several independent, modular units that interface to various services.
At the MAC layer, integrated services are supported by giving priority to
those traffic types requiring bounded delay.
The organization is modular, allowing components and services to be added
or improved while redesigning the minimum amount of hardware.
The software network architecture has also been designed; it includes
a network controller to configure, monitor, and administer the network.

METROCORE is a trademark of Bell Communications Research, Inc.

This paper will be presented at the IFIP WG 6.4 Workshop HSLAN'88,
April 14-15, 1988, Liege, Belgium.


The Bellcore METROCORE™ network is a Metropolitan Area Network (MAN)
prototype using an enhanced version of Fasnet [LIMB82]
as its Media Access Control (MAC) protocol.
This prototype
serves as a test-bed for research in the integration of
diverse services onto a unified packet switched network.
The protocol provides special access for traffic with
delay constraints.
There is also a mechanism for ensuring
fairness among the active nodes.
Nodes are connected to each other by a dual bus network
similar to the architecture being considered by IEEE 802.6.
Rather than using passive taps as in the original Fasnet proposal,
active regeneration is used with optical fault bypass devices
and a standby fiber-optic link to ensure fail-safety (Figure 1).



"Metrocore implementation with four nodes."


The hardware designed for each node includes a
MAC layer circuit which interfaces the node to the
network.
This circuit exchanges packets across a specialized internal
communication bus with a variety of "service processors" (SPs),
which serve as interfaces to the services running on the network.
The internal bus is called the "Fast Packet Bus" (FPB) and
is implemented like the backplane of a double-height VME card cage.
However, only the VME physical specification is used, since
the VME electrical definition,
like most computer backplanes,
cannot provide as much bandwidth as we desire for this application.
Each SP ideally occupies one board, allowing several services
to be present in a node.
In addition to the MAC circuit and the SPs, there
is a node controller circuit which keeps track of statistics
for billing, management, administration, and operations and takes care
of diagnostics and fault control.

The service processors we anticipate include interfaces for services such as
telephone, television, computers, disk servers, and LAN interconnection.
The LAN service processor was designed first since
it is an important service to explore in the near term,
and because this allows us to connect (indirectly) anything that
already uses Ethernet.
This processor consists of five modular circuits implemented on
four boards that may be modified for use in other SPs.
This processor implements a transparent
protocol which encapsulates Ethernet packets for transport
across Fasnet and retransmission on a remote Ethernet.
Other suggested protocols for service integration for interconnected
LANs [FRAT87] and routing for interconnected MANs [STR87a,b]
have not been implemented yet.

Figure 2 shows the system block diagram.  Each component is described
below, followed by brief discussions of the software architecture,
and a Gb/s network enhancement.





"Metrocore node."


"Optical Transmission System"
We are presently using an off-the-shelf transmission system
with single-mode optical fibers at 150 Mb/s and
a wavelength of 1.3 μm.
This system was chosen because it provides a virtual clear channel,
and can tolerate the long strings of consecutive zeros which occur between
the head-end node (which sends blank packets) and the node that uses
a packet by filling in the empty data field.
The system does its own scrambling and clock recovery.
Connections between nodes are point-to-point, but
an optical fail-safe device may be inserted to make the node appear
transparent in case of a local node failure [ALBA82, LOH86].
A standby fiber-optic link allows network reconfiguration in case one of
the links is not functioning.

"MAC Chip"
This chip is the major component of the MAC circuit.
The MAC chip executes the media access layer protocol and thus governs the
flow of data onto the network.
Two chips are needed for the two directions of traffic on the network.
The design is implemented as a
semi-custom chip manufactured by Motorola, using emitter coupled logic
(ECL) technology for high-speed performance.
The maximum speed was measured at 310 Mb/s.
The MAC chip replaces a whole board of SSI/MSI chips in a
previous prototype.

The MAC chip reads the various fields of the packet header to
execute proper transmission, and signals interface circuits
for reception of packets.
Other functions include synchronization of the chip to the incoming packet
stream, generation of timing signals, and 1 × 16 serial-to-parallel
conversion.
Each chip contains the circuitry needed to execute special
functions associated with the end nodes.
Thus, as a fail-safe mechanism,
a middle node may become the head-end if the original head-end fails or
if the network becomes severed.

The chip was designed and simulated on a Daisy CAD workstation using
a library of components made available by Motorola.
The Motorola
computer system was then used for the final timing simulation, race-condition
testing, and placement and routing of the metal paths.
This chip is equivalent to 2500 ECL gates and comes in a 149-pin PGA
package.  98% of the available circuitry and pins were used.

"MAC Circuit"
The MAC circuit consists of MAC chips, address recognition circuits
(address filters), bus arbiters, a head-end enable detection circuit
(activity detect), a head-end access field generator,
ECL/TTL converters, control circuitry, and a power-up reset
circuit (see Figure 3).
There are generally two of everything due to the two unidirectional
buses used in Fasnet.



"Medium Access Control Circuit."


Each MAC chip feeds incoming data, in a 16-bit parallel word, into an
address recognition circuit which compares the address field
of the packet to a group address and a single node address.
Thus a node may be addressed individually or as part of a multi-cast group.
The link-layer address is structured so that the first bit (most
significant) indicates group or node address (0/1), the next
11 bits identify the node or group, and the last four constitute
a sub-address indicating the intended service processor.
The address circuit is implemented
in TTL logic, with ECL/TTL converters between it and the MAC chip.
Data and control information are passed on to the service processors
through the two outgoing paths to the Fast Packet Bus.
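
As an illustration, the following C fragment unpacks this address layout;
the field and function names are ours, and the real hardware does this in
the address filter logic rather than in software.

    /* Sketch of the 16-bit link-layer address described above.
     * Field widths follow the text; names are illustrative. */
    #include <stdint.h>
    #include <stdio.h>

    struct fasnet_addr {
        unsigned is_node;  /* 1 bit: 0 = group (multicast), 1 = single node */
        unsigned id;       /* 11 bits: node or group identifier */
        unsigned sp;       /* 4 bits: sub-address of the intended SP */
    };

    static struct fasnet_addr parse_addr(uint16_t raw)
    {
        struct fasnet_addr a;
        a.is_node = (raw >> 15) & 0x1;   /* most significant bit */
        a.id      = (raw >> 4) & 0x7FF;  /* next 11 bits */
        a.sp      = raw & 0xF;           /* last 4 bits */
        return a;
    }

    int main(void)
    {
        struct fasnet_addr a = parse_addr(0x8123);
        printf("%s %u, SP %u\n", a.is_node ? "node" : "group", a.id, a.sp);
        return 0;
    }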

Four bus arbiter circuits are included for the four incoming paths
from the FPB.
These paths carry data destined for transmission
from the service processors to the MAC chips.
There are "data" and "voice" paths for each of two MAC chips.
(The data and voice paths are not limited to those services, but are used
to differentiate between traffic which can tolerate network delays (low
priority) and traffic which cannot (high priority).)
The paths are separated
because the two Fasnet lines operate independently, and the two cycles
(for different priority traffic) are also independent.
The bus arbiters choose which of competing service processors gets
next access to a given path of the FPB, and therefore to the network.
The choice is made using a simple round-robin scheme.
Processors using the same FPB path
are not further prioritized, although this could be done
using a more complex bus arbiter circuit.
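
A minimal C sketch of such a round-robin choice follows; the real arbiter
is combinational hardware, and the function and mask names here are
illustrative only.

    /* Round-robin arbitration sketch.  request is a bitmask of SPs
     * asserting their "request" lines; *last holds the index granted
     * on the previous round.  The 4-SP limit per path follows the text. */
    #define NUM_SP 4

    static int rr_grant(unsigned request, int *last)
    {
        for (int i = 1; i <= NUM_SP; i++) {
            int cand = (*last + i) % NUM_SP;
            if (request & (1u << cand)) {
                *last = cand;    /* remember for the next arbitration */
                return cand;     /* raise "acknowledge" to this SP */
            }
        }
        return -1;               /* no SP is requesting */
    }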

An activity detection circuit monitors the
synchronization at the receiver to discover whether
the node is to be head-end or not.
The head-end node
has a disabled receiver for the line that it heads and thus
always remains head-end.
A middle node (Figure 1) will become
head-end in the event of a failure just upstream of itself.
If the receiver does not provide a carrier detect signal,
the MAC synchronization function could be used:
when the MAC fails to synchronize for 2 msec (for example),
it turns on its head-end function, and then monitors the incoming
data stream for a valid Fasnet synchronization pattern.
Reception of such a pattern would then turn off the head-end
function, returning the node to its middle-node role.
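
The failover rule can be sketched as a small state machine in C; the
timeout follows the 2 msec example above, while the one-tick-per-microsecond
granularity and the names are assumptions of ours.

    /* Head-end failover sketch following the mechanism described above. */
    #include <stdbool.h>

    #define SYNC_TIMEOUT_US 2000  /* ~2 msec without synchronization */

    enum role { MIDDLE_NODE, HEAD_END };

    /* Call once per microsecond with the current sync indication. */
    static enum role update_role(enum role r, bool sync_seen, unsigned *us_lost)
    {
        if (sync_seen) {
            *us_lost = 0;
            return MIDDLE_NODE;   /* valid Fasnet sync: upstream is alive */
        }
        if (*us_lost < SYNC_TIMEOUT_US)
            (*us_lost)++;
        return (*us_lost >= SYNC_TIMEOUT_US) ? HEAD_END : r;
    }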

The access field generator circuit (see Figure 3), implemented as
a PLA (programmable logic array), is required if a node is a head-end.
This circuit takes care of "turning around" the start and end bits of the
Fasnet access field, which in turn operate the network cycles.
Every
node which might be head-end must have this.
It is possible
to avoid putting this circuit in every node; the tradeoff is
that all nodes between the fault and the first head-end-capable
node would go down.

The MAC chips require information to determine system parameters such
as packet size.
These, as well as the node addresses, are stored
in a small register and are generally programmed on power-up by a control
circuit in the node.

The remaining functions in Figure 3 are the power-up reset circuit,
which holds reset high until the power supply voltage stabilizes,
and a local oscillator which serves as the system clock in head-end nodes.

"Fast Packet Bus"
The backplane of the card cage has two 96-pin connectors which carry
the two incoming paths to the processors, and the four outgoing paths
to the MAC circuit.
Several pins also supply power and ground (TTL) to the boards.
In addition, there is a synchronous serial line which allows the
node controller SP to download and read the stored MAC parameters.

On each incoming path, there are 16 bits of data; a four-bit sub-address,
which identifies the processor to receive the packet; and a four-bit control
field, which comes from the beginning of the information part of the
packet and may optionally be used to give the SP a code already parsed
from the packet.  A signal called
"Packet Ready" indicates the beginning of a new, valid packet, and
two clock signals (at about 10 MHz and 20 MHz) are given, from which
the MAC circuit's two-phase clock (φ1 and φ2) is derived.

On each outgoing path there are, again, 16 bits of data.
A strobe
signal from the MAC chip prompts each word from the processor.
Each processor (up to four) has its own set of handshaking lines with
the bus arbiter.
If more than four processors use a single path, then this handshaking
may be daisy-chained on every fourth processor.
The processor wishing to
transmit raises its "request" line; the bus arbiter selects the next
requesting processor in turn and responds with an "acknowledge."
The processor
then raises its first word of data together with
"data ready."
When the MAC gets access to the network, it will strobe the processor
for the following words.
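
The transmit handshake can be summarized in C as follows; the signal
functions are placeholders for the actual backplane lines, stubbed here so
the sketch is self-contained.

    /* SP-side transmit handshake sketch, following the description above. */
    #include <stdint.h>
    #include <stdbool.h>

    static void raise_request(void)    { /* assert the "request" line */ }
    static bool ack_from_arbiter(void) { return true; /* arbiter grants */ }
    static void put_word(uint16_t w)   { (void)w; /* drive the 16 data lines */ }
    static void raise_data_ready(void) { /* assert "data ready" */ }
    static bool strobe_from_mac(void)  { return true; /* MAC wants next word */ }

    static void send_packet(const uint16_t *words, int n)
    {
        raise_request();             /* ask the bus arbiter for the path */
        while (!ack_from_arbiter())
            ;                        /* wait for the round-robin "acknowledge" */
        put_word(words[0]);
        raise_data_ready();          /* first word raised with "data ready" */
        for (int i = 1; i < n; i++) {
            while (!strobe_from_mac())
                ;                    /* MAC strobes once it has network access */
            put_word(words[i]);
        }
    }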

This type of internal node bus uses a lot of pins (192).
The choice could be made to reduce the number of wires by time-multiplexing
signals of the various buses.
This, however, would require a large and
complex buffer on the MAC circuit, which would be more difficult
than having a large backplane bus.
The present FPB has the advantage
of not presenting any bottleneck beyond that caused by
the node's access to the network (i.e., same-priority, same-direction packets
must compete for the network at the MAC chip anyway).

"Node Controller Circuit"
When a node is powered up, all processors as well as the
MAC circuit must know the node's address.
This circuit is responsible for
providing this information as well as certain MAC chip parameters.
This circuit would also monitor
statistics about the node's activities for billing, etc.
There may
be a central control node for the whole network with which this circuit would
communicate to reconfigure addresses, diagnose network failures or exchange
statistical information.
The controller circuit would
therefore be able to construct Fasnet packets and transmit and receive
like any other service processor.
All the functions of the node controller were implemented in software that runs
on the 68000 of the Ethernet Circuit in the LAN processor (see Figure 4).

"LAN Service Processor"
This processor performs the function of a LAN/MAN bridge (see Figure 4):
it carries packets transparently between
two Ethernet LANs which may be separated by considerable distance.
A description of the functionality and philosophy of design
may be found in [ALBA86, DEGR86].
The LAN service processor consists of five circuits implemented on four boards,
which could probably be integrated down to one board using a VLSI technology
like that used for the MAC chip.
Each circuit performs a special task:
(1) Input Buffer, (2) Output Buffer,
(3) Depacketizer, (4) Packetizer, and (5) the Ethernet Circuit
and Node Controller.
These are general building blocks for any service processor.
The buffers are rather large, since a LAN is expected to generate more
traffic than a computer interface would, for example, but the design
can be scaled down and used in any case.
The packetizer and depacketizer
are bit-sliced processors offering very high throughput for communications,
but weak processing power.
The Ethernet circuit is a specialized processor
with potential for complex processing.
It can be tailored for another
purpose by substituting another interface for the Ethernet dependent
parts.
This processor is not, however, suited as a general purpose
computer because of the limited memory configuration.
Rather, it should be seen as a powerful dedicated controller.



"LAN Service Processor and Node Controller."


"Input and Output Buffer Circuits"
The input buffer connects to the incoming paths of the FPB.
Each of the two large
circular buffers is synchronized to its respective MAC clock.
Read
and write pointers are generated for each buffer, and the circuit
monitors whether the buffers are full or empty by
comparing the values of the two pointers.
The buffers themselves are
16K × 16 RAM chips with very fast (35 nsec) access times.
In this circuit, the two-phase clock from the FPB is decoded and used.

On the other side, the buffers feed the depacketizer circuit, which uses
the clock of whichever input buffer it is reading at the time.
Two finite state machines (PLAs), referred to as the
"producer" and "consumer," govern the flow of data into
and out of the buffer, respectively.
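
A C sketch of this pointer-comparison scheme follows; the real buffers are
16K × 16 RAMs driven by the PLA state machines, and the one-slot-unused
full test is our convention, not necessarily the hardware's.

    /* Ring-buffer sketch of the full/empty test described above. */
    #include <stdint.h>
    #include <stdbool.h>

    #define BUF_WORDS 16384            /* 16K 16-bit words */

    struct ring {
        uint16_t mem[BUF_WORDS];
        unsigned rd, wr;               /* read and write pointers */
    };

    static bool ring_empty(const struct ring *r) { return r->rd == r->wr; }
    static bool ring_full(const struct ring *r)
    {
        return ((r->wr + 1) % BUF_WORDS) == r->rd;  /* one slot kept unused */
    }

    /* "producer" side: write a word if space remains */
    static bool ring_put(struct ring *r, uint16_t w)
    {
        if (ring_full(r)) return false;
        r->mem[r->wr] = w;
        r->wr = (r->wr + 1) % BUF_WORDS;
        return true;
    }

    /* "consumer" side: read a word if available */
    static bool ring_get(struct ring *r, uint16_t *w)
    {
        if (ring_empty(r)) return false;
        *w = r->mem[r->rd];
        r->rd = (r->rd + 1) % BUF_WORDS;
        return true;
    }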

The output buffer takes data from the packetizer circuit and stores it for
transmission on the outgoing FPB.
Many packets may be stored in
the single buffer along with information about which outgoing path(s)
each one is to be transmitted on.
Packets destined for both directions
on the network must be transmitted in each direction separately, since
the two MAC chips
cannot be synchronized with each other.

"Depacketizer and Packetizer Circuits"
The depacketizer and packetizer circuits are almost identical in hardware
but different in software.
The processor consists of an AMD
2910 bit-sliced sequencer and an AMD 2116 ALU chip.
The function performed
involves stripping Fasnet headers from incoming packets and storing the
Ethernet fragments away in a DRAM on the Ethernet Processor circuit, and
then indicating to that circuit when a packet is completely reconstituted
and ready for retransmission on Ethernet.

The depacketizer circuit also contains an Ethernet address filter, which
currently uses a hashing algorithm but would ideally be implemented
as a Content Addressable Memory (CAM), which may
become available in the future.
Ethernet packets are filtered so that only the ones intended for local
Ethernet nodes are retransmitted.
Others are dropped.
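
A minimal C sketch of such a hash-based filter follows; the table size,
hash function, and collision handling are illustrative assumptions, not
the actual algorithm used in the circuit.

    /* Hash-based Ethernet address filter sketch (a CAM would replace it).
     * The table is assumed never to fill completely. */
    #include <stdint.h>
    #include <stdbool.h>
    #include <string.h>

    #define TABLE_SIZE 256

    static uint8_t table[TABLE_SIZE][6];   /* known local Ethernet addresses */
    static bool    used[TABLE_SIZE];

    static unsigned hash6(const uint8_t a[6])
    {
        unsigned h = 0;
        for (int i = 0; i < 6; i++)
            h = h * 31 + a[i];             /* simple multiplicative hash */
        return h % TABLE_SIZE;
    }

    /* Record a local address; linear probing resolves collisions. */
    static void learn_local(const uint8_t a[6])
    {
        unsigned h = hash6(a);
        while (used[h] && memcmp(table[h], a, 6) != 0)
            h = (h + 1) % TABLE_SIZE;
        memcpy(table[h], a, 6);
        used[h] = true;
    }

    static bool is_local(const uint8_t a[6])
    {
        unsigned h = hash6(a);
        while (used[h]) {
            if (memcmp(table[h], a, 6) == 0)
                return true;               /* retransmit on the local Ethernet */
            h = (h + 1) % TABLE_SIZE;
        }
        return false;                      /* drop (depacketizer side) */
    }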

The packetizer is the same processor as used in the depacketizer circuit,
except that the task here is to transform whole Ethernet packets into
bursts of Fasnet packets.
The Ethernet packets are completely and
transparently encapsulated.
Addresses are not converted.  Headers are
added indicating Fasnet addresses and control information.
An Ethernet
address filter lets only those packets not destined for local Ethernet
nodes escape to the MAN.
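
A C sketch of the fragmentation step follows; the Fasnet header fields and
payload size shown are illustrative assumptions, since the actual Fasnet
packet format is not detailed here.

    /* Encapsulation sketch: fragment one Ethernet frame into a burst
     * of Fasnet packets, leaving the frame itself untouched. */
    #include <stdint.h>
    #include <string.h>

    #define FASNET_PAYLOAD 60   /* illustrative payload size per packet */

    struct fasnet_pkt {
        uint16_t dest;          /* 16-bit link-layer address (see above) */
        uint8_t  seq;           /* fragment sequence number */
        uint8_t  last;          /* nonzero on the final fragment */
        uint8_t  data[FASNET_PAYLOAD];
    };

    /* Split an Ethernet frame into Fasnet packets; returns the count. */
    static int packetize(const uint8_t *frame, int len, uint16_t dest,
                         struct fasnet_pkt *out, int max)
    {
        int n = 0;
        for (int off = 0; off < len && n < max; off += FASNET_PAYLOAD, n++) {
            int chunk = (len - off < FASNET_PAYLOAD) ? (len - off)
                                                     : FASNET_PAYLOAD;
            out[n].dest = dest;
            out[n].seq  = (uint8_t)n;
            out[n].last = (off + chunk >= len);
            memcpy(out[n].data, frame + off, chunk);
        }
        return n;
    }
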
"Ethernet Processor Circuit"
This circuit consists mainly of two dynamic RAM controller chips, with their
respective banks of DRAM, which interface to the packetizer and depacketizer
circuits;
an AMD Local Area Network Controller for Ethernet (LANCE) chip; and
a 68000 microprocessor chip.
The 68000 takes care of initialization and
memory management, and the LANCE, which is itself a dedicated
microprocessor, takes care of getting traffic onto and
off of the Ethernet.
In addition, there are two UARTs on the 68000 circuit which
allow a terminal and a modem to be connected to the node
for diagnostic purposes.

"Network Software Architecture"
For network management and operation purposes, each network node
is structured according to the Open Systems Interconnection (OSI)
Reference Model to support communications between all the
nodes and the network controller.
Figure 5 shows the software network architecture.
The Management Control Unit consists of network management
and node management units.


"Network Software Architecture."


The Network Management Unit monitors the status of the network, collects
statistical data from each node and stores it on a disk, provides hard
copies of network statistics, and handles network set-up
and configuration of the Ethernet LAN interconnections via a human
administrator.

The Node Management Unit performs a self-test on power-up and upon receiving
requests from the Network Management Unit.
It also provides remote reset and
download capabilities to facilitate software updates and maintenance.
It maintains configuration data and collects statistical data about local
operations.
Upon receiving requests from the Network Management Unit,
it reports the most recent information about the node operations.
It also monitors any fault events on the node.
When a fault occurs,
it attempts either to correct the fault or to collect more information.
In any case, the fault is reported to the Network Management Unit along with
the result of the attempted recovery.

In order to communicate reliably with each other, the Network Management Unit
and the Node Management Unit require Transport layer services.
The Transport layer uses the Data Link layer services to handle
connection setup and termination, flow control,
multiplexing, fragmentation, and error detection and correction.
The Data Link layer deals with sending and receiving packets to and from
other nodes without bit pattern errors.

Note that these reliable
services are provided only for management purposes.
They do not interfere with the transparent wiring services offered
by the LAN processor.
In other words, these protocols
are logically separated from the transparent wiring services.

"Multi-Gigabit/second MAC Circuit"
This work investigates how to share the multigigabit/second
transport capabilities offered by lightwave systems among a multitude
of users and how to integrate the many broadband and narrowband services
that a user may require.
We have demonstrated a 1.2 Gb/s system to provide packet
communications in the distributed switching system shown in Figure 6.
The system provides multiple access at 1.2 Gb/s and consists
of a unique node architecture that combines available
lightwave, gallium arsenide, and silicon technologies [KARR87].


"Gigabit/second access interface."


Gallium arsenide (GaAs) multiplexer and demultiplexer ICs represent
today's most advanced technology.
Silicon very large scale integrated (VLSI) circuits have up to 100,000
transistors on a chip,
but generally do not operate in the Gb/s range.
Commercially available GaAs ICs will work above 1 Gb/s
and their circuit densities are rapidly advancing.

We use 8:1/1:8 multiplexer/demultiplexer GaAs ICs made by
Gigabit Logic Inc., together
with our silicon MAC IC to divide the 1.2 Gb/s bandwidth into
8 channels of 150 Mb/s.
These devices are considered "medium-scale integration" (MSI)
and contain about 200 gates.
The data is controlled by a packet switching protocol using control fields
only on one of the channels [GARR86].
The protocol, based on Fasnet, is executed by the Media Access Control (MAC)
chip.
Other channels carry only data and are synchronized to the control channel.
Thus each user has access to the full 1.2 Gb/s capacity on a per-packet basis.
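
As a rough C illustration, the 1:8 demultiplexer can be modeled as dealing
successive symbols round-robin onto the eight channels; whether the GaAs
parts interleave by bit or by word is not stated here, so bit interleaving
is assumed, and all names are illustrative.

    /* Round-robin 1:8 demultiplexing sketch (bit interleaving assumed).
     * bits[] holds one bit per element; chan[c] must have room for
     * n / CHANNELS elements. */
    #include <stdint.h>

    #define CHANNELS 8

    static void demux_1to8(const uint8_t *bits, int n, uint8_t **chan)
    {
        for (int i = 0; i < n; i++)
            chan[i % CHANNELS][i / CHANNELS] = bits[i];  /* bit i to channel i mod 8 */
    }

    /* The matching 8:1 multiplexer simply reverses the mapping. */
    static void mux_8to1(uint8_t **chan, int n, uint8_t *bits)
    {
        for (int i = 0; i < n; i++)
            bits[i] = chan[i % CHANNELS][i / CHANNELS];
    }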

Some gallium arsenide integrated circuit components are becoming available
in the 2 Gb/s range.
A 16:1 multiplexer and a 1:16 demultiplexer operating at 2.4 Gb/s were built
which could
upgrade the 150 Mb/s MAC circuit to 2.4 Gb/s.

Metropolitan area network technology enables users currently
connected to local area networks, geographically limited
within one building or office complex,
to extend their range of networking interconnectivity.
Large corporations that have several LANs operational at separate sites
interspersed within an urban area will be able to utilize their resources
more effectively and make them more widely available through the use of
MANs.

The technology is demonstrated in a laboratory test bed.
Working prototypes demonstrate that present technology is capable of
interconnecting 10 Mb/s LANs over MANs implemented at 150 Mb/s,
with an upgrade path to 1.2 Gb/s in the near future.

"REFERENCES"


[ALBA82]
A. Albanese,
"Fail-Safe Nodes for Lightguide Digital Networks,"
Bell System Technical Journal, Vol. 61, No. 2, pp. 247-256, February 1982.
[ALBA86]
A. Albanese, G. DeGrandi, M.W. Garrett, "An Architecture for
Transparent MAN/LAN Gateways," IEEE International Conference on
Communications, Toronto, Canada, June 1986.
[DEGR86]
G. DeGrandi, M.W. Garrett, A. Albanese, T.H. Lee, "The Design and
Implementation of a Transparent MAN/LAN Gateway," Proc.
EFOC/LAN'86, Information Gatekeepers Inc., Amsterdam June 1986.
[FRAT87]
L. Fratta and A. Albanese,
"Service Integration for Interconnected LANs,"
The Third International Conference on Data Communication Systems and Their
Performance, Rio de Janeiro, Brazil, June 22, 1987.
[GARR86]
M.W. Garrett, J.O. Limb, A. Albanese, "Multiple Gb/s Fiber-Optic
Metropolitan Area Network," IEEE International Conference on
Communications, Toronto, Canada, June 1986.
[KARR87]
M. Karr, A. Albanese, M. Garrett, K.W. Loh, M. Maszczak,
"Experimental Gigabit Packet Communications Network,"
IEEE Optical Fiber Communications Conference,
Reno NV, February 1987.
[LIMB82]
J.O. Limb, C. Flores, "Description of Fasnet - A
Unidirectional Local Area Communications Network," Bell
System Technical Journal, Vol. 61, No. 7, p. 1413, September 1982.
[LOH86]
K. W. Loh, M. Karr, A. Albanese, W. C. Young, L. Curtis,
J. Baran, and L. McCaughan,
"Fail-Safe Nodes for Fiber Networks,"
Optical Fiber Conference, Atlanta, Georgia, February 24-26, 1986.
[STR87a]
L. Strigini, L. Fratta, and A. Albanese,
"Multicast Services on High-Speed Interconnected LANs,"
1987 Workshop on High Speed LANs,
Aachen, West Germany,
February 16-17, 1987.
[STR87b]
L. Strigini, L. Fratta, and A. Albanese,
"Explicit Offset Routing for Interconnecting High-Speed Networks,"
EFOC/LAN 87,
Basel, Switzerland,
June 3-5, 1987.

