
Ethernet-based fieldbuses for industrial networks: the basics

From its base as a universal office network established 30 years ago, Ethernet has been moving into industrial applications for a decade. Drives, industrial automation, aircraft, railway vehicles and others are increasingly being interconnected and controlled via Industrial Ethernet. In this primer written for rookie network engineers, Reiner Grübmeyer and Stephan Rupp introduce Ethernet-based industrial control system architecture and some of its specialised variations.

INDUSTRIAL AUTOMATION and vehicle control demand real-time performance and safety. So far, Ethernet has been growing and meeting new demands in an evolutionary way. This article summarises the requirements and solutions for Ethernet-based control systems designed for industrial environments.

Since the 1980s, Ethernet has been spreading fast as a universal medium for interconnecting computers and devices of all types. One reason for this success is the evolutionary approach: the basic specifications of the two Layer 2 protocols, MAC (Medium Access Control, IEEE 802.3) and LLC (Logical Link Control, IEEE 802.2), can be extended by further control protocols such as IEEE 802.1, which among other things contains the Spanning Tree Protocols, VLANs and port-based access control, as well as by application-specific extensions (IEEE 802.4 and higher).

Switches and layers

An Ethernet switch's function is based on the Ethernet message format and the Layer 2 bridging protocols. This layer carries the local network addresses (MAC addresses). Layer 2 protocols include Link Aggregation (802.3ad), VLANs (802.1Q), Spanning Tree (802.1D, 802.1w), QoS (802.1p) and flow control (802.3x), as well as GVRP (dynamic VLAN registration) and GMRP (dynamic Layer 2 multicast registration). Ethernet switches allow all ports to operate at their nominal speed.

Typically, local networks are operated as IP subnetworks. This allows the use of private IP addresses for hosts and devices in home and office networks, which means the network can be structured flexibly at Layer 3, the Internet Protocol layer. In such a network, IP routing protocols may be applied. More sophisticated Ethernet switches also support Layer 3 protocols such as OSPFv2, RIPv2, VRRP, IGMP snooping, IPv4 forwarding, DiffServ, ARP and ICMP, as well as a DHCP client/server to receive or distribute IP addresses.

Because of their many configuration options, advanced Ethernet switches need a user interface, which allows operation via a command line interface (CLI), a remote terminal (Telnet) or a browser (web server), as well as a management interface (SNMP). Devices with these options are known as 'managed switches'. Such switches also receive an IP address, because they need to be addressable as hosts for configuration.

Irrespective of protocol diversity, every Ethernet switch operates in basically the same manner. Acting much like a post office, a switch receives messages at one of its ports (message in), analyses the destination address of the message, and forwards it to a suitable port (message out). The source address and destination address (MAC addresses) of each message (Ethernet frame) are contained in the frame header.

The switch learns which addresses belong to which ports from the source addresses received at each input port. Addresses and matching ports are maintained in a table (Switch Route Table), which is looked up each time a message is received. If the destination port is known, the message is forwarded to that matching port only; otherwise, the message is repeated to every port. Figure 1 shows the storage and forwarding of messages.


Fig. 1. The storage and forwarding of messages: A switch receives messages at a port, analyses the destination address and forwards them to a suitable port. Each message's source and destination MAC addresses are contained in the frame header.
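
The learning and forwarding logic described above can be sketched in a few lines of Python; the port numbers, addresses and class name below are invented for illustration and do not represent any particular switch implementation.

# Sketch of Layer 2 learning and forwarding (illustrative only).
class LearningSwitch:
    def __init__(self, ports):
        self.ports = set(ports)
        self.table = {}                      # MAC address -> port ("Switch Route Table")

    def receive(self, in_port, src_mac, dst_mac):
        self.table[src_mac] = in_port        # learn: source address is reachable via in_port
        if dst_mac in self.table:            # known destination: forward to matching port only
            return [self.table[dst_mac]]
        return sorted(p for p in self.ports if p != in_port)   # unknown: repeat to every other port

sw = LearningSwitch(ports=[1, 2, 3, 4])
print(sw.receive(1, "aa:aa:aa:aa:aa:01", "aa:aa:aa:aa:aa:02"))  # unknown destination -> [2, 3, 4]
print(sw.receive(2, "aa:aa:aa:aa:aa:02", "aa:aa:aa:aa:aa:01"))  # learned destination -> [1]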

The header of an Ethernet message (IEEE 802.3 Frame) contains further information beyond source and destination addresses, such as VLAN tags, which qualify different traffic classes. This information classifies a category or group a message belongs to and may also decide which way to forward the message. Depending on its classification, a message can be placed in a priority queue on a port. If a message is part of a multicast group, it will be copied and forwarded to all members (destination addresses) of the group.

Note that multicast is the delivery of a message or information to a group of destination computers simultaneously in a single transmission from the source, creating copies automatically in other network elements, such as routers, only when the topology of the network requires it.

In the same way, IP header information, which is part of an Ethernet frame's payload, can be accessed by the switch and used for the classification and handling of messages. This allows IP multicasts to be mapped onto MAC multicasts over Ethernet ports. This is called IGMP snooping, and it is used in telecommunication networks to reduce unicast streams for IPTV. Ethernet switches that support Layer 3 sometimes provide such features.

State-based information

The features mentioned so far need no information about context beyond the individual Ethernet frame. Whether a frame belongs to a specific HTTP or Session Initiation Protocol (SIP) session, for example, would be state-based information. In a state-based communication system, messages represent the entire state of a node; a sequence number, for instance, is state-based information which the switch would need to store as context for individual frames. The handling of such contexts needs a CPU and is beyond the scope of pure Ethernet switches. Such Layer 3 information is handled by routers, which are usually implemented as software on a CPU, e.g. based on a Linux distribution.

Layer 3 switches can handle such state-based information, not in the switch silicon but in the switch controller's software. Managed switches, which provide a switch controller for user configuration, can implement Layer 3 functions such as NAT (network address translation). This is frequently used in IPv4-addressed domestic and private networks to translate private IP addresses into public IP addresses.
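
As a rough sketch of the translation step, assuming a single public address and an invented port range (a real NAT implementation also rewrites checksums and tracks connection state):

# Sketch of source NAT: private (address, port) pairs are mapped to
# ports on one public IP address.
PUBLIC_IP = "203.0.113.1"          # example public address
nat_table = {}                     # (private_ip, private_port) -> public_port
next_port = 40000                  # example start of the public port range

def translate_outbound(private_ip, private_port):
    global next_port
    key = (private_ip, private_port)
    if key not in nat_table:       # first packet of a new flow: allocate a public port
        nat_table[key] = next_port
        next_port += 1
    return PUBLIC_IP, nat_table[key]

print(translate_outbound("192.168.1.10", 51514))   # -> ('203.0.113.1', 40000)
print(translate_outbound("192.168.1.11", 51514))   # -> ('203.0.113.1', 40001)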

The traffic throughput for Layer 3 functions depends on the switch controller's CPU power. Usually, managed switches are designed as wire-speed Layer 2 switches with limited Layer 3 performance, which is sufficient for DHCP or NAT over 10 Mbps DSL lines. Wire-speed Layer 3 and Layer 4 features such as VPN, encryption, firewalls or deep packet inspection are implemented on high-performance multi-core CPUs, which are also used for servers and embedded servers; there are also multi-core CPUs that specialise in packet processing. In such cases, the Layer 3 and Layer 4 functions are completely separate from the Layer 2 Ethernet switches.

Forwarding a message to a group of receivers is shown in Fig. 2. Here, the sender sends a message to a multicast address (i.e. an address representing the group). The Ethernet switch resolves the multicast address into individual addresses and repeats the message to all respective ports. This is much faster and more efficient because it avoids the generation of many unicast messages by the sender.


Fig. 2. Multicast addressing a group of receivers: The delivery of a message to destination computers simultaneously in a single transmission, creating copies automatically only when the topology requires it.
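
A minimal sketch of this resolution step, with an invented group table of the kind that GMRP registration or IGMP snooping would normally populate:

# Sketch of multicast forwarding: one incoming frame is copied to every
# port registered for the group address.
group_table = {
    "01:00:5e:00:00:0a": {2, 3, 5},   # multicast MAC -> member ports (example values)
}

def forward_multicast(in_port, dst_mac):
    members = group_table.get(dst_mac, set())
    return sorted(members - {in_port})   # never send the frame back out of the input port

print(forward_multicast(1, "01:00:5e:00:00:0a"))   # -> [2, 3, 5]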

The use of VLAN tags within an Ethernet frame's header allows bigger LANs to be broken into segments, or virtual LANs (VLANs). Figure 3 illustrates this using colours as VLAN tags on Ethernet frames and switch ports. In this case, the Ethernet switch is configured such that the ports are allocated to specific VLANs. The switch marks all incoming messages on such a port with the corresponding VLAN tag and forwards packets received with a VLAN tag only to those ports that match the corresponding VLAN. For traffic from multiple LAN segments, ports can be configured as trunks to provide the interconnection.


Fig. 3. Ethernet switch configuration: This illustrates VLAN tags on Ethernet frames and switch ports as colours. The Ethernet switch is configured such that ports are allocated to specific VLANs.

Multiple LAN segments can also be implemented physically using enough Ethernet switches. In virtual LANs, the segmentation is achieved by configuring the switches accordingly.
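
Port-based VLAN forwarding can be sketched as a simple filter; the port-to-VLAN assignments below are invented example values, with one port configured as a trunk:

# Sketch of port-based VLANs: untagged frames receive the VLAN of their
# ingress port; frames leave only through ports of the same VLAN
# (or through a trunk port that carries several VLANs).
port_vlans = {1: {10}, 2: {10}, 3: {20}, 4: {20}, 5: {10, 20}}   # port 5 acts as trunk

def forward(in_port, vlan_tag=None):
    vlan = vlan_tag if vlan_tag is not None else next(iter(port_vlans[in_port]))
    return sorted(p for p, vlans in port_vlans.items()
                  if p != in_port and vlan in vlans)

print(forward(1))               # untagged frame from a VLAN-10 port -> ports [2, 5]
print(forward(5, vlan_tag=20))  # tagged frame arriving on the trunk -> ports [3, 4]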

Layer 2 features like multicast, VLAN or priority queues for traffic classes are entirely based on information contained in individual frames, the port configuration and address tables. They do not need any state-based information and can be performed at wire speed by Layer 2 switches.

Embedded networks

Ethernet switches for embedded systems are built to match specific environmental conditions, such as an extended temperature range of -40 to +70 °C. Embedded switches and routers are also hardened to withstand higher levels of electromagnetic radiation and to comply with strict limits on their own emissions. Switches used in vehicles, aircraft and industrial plant must also withstand high shock and vibration, and be resistant to dust, humidity and many other hazardous substances. Moreover, embedded systems are frequently adapted to individual customer requirements and must meet specific needs in terms of product availability, maintainability, service and repair.

The composition of an embedded switch or router follows the process shown in Fig. 4. Depending on the application and customer specific requirements, the feature set is defined and an appropriate hardware platform is chosen.


Fig. 4. Showing the composition of an embedded switch or router: Depending on the application and customer specific requirements, the feature set is defined and an appropriate hardware platform chosen.

With embedded systems, hardware development starts from a well-established base by using standardised hardware and supplier-specific form factors.

Topology

In industrial environments there are functional network requirements beyond the scope of office networks. These include practical concerns, such as avoiding the cabling effort of a star topology, or of dual star topologies for redundancy. In an industrial environment, a linear network topology is usually the better choice, and a ring topology provides the redundancy to heal single failures such as broken links.

Real Time (RT) operations

If Ethernet is used as a field bus to control manufacturing equipment or vehicles, the network must respond within a certain time. An industrial environment typically demands RT responsiveness, i.e. responses within a specified interval. Figure 5 illustrates the environment and the RT requirements. As shown on the right, a field bus transmits control messages from a controller to an actuator (e.g. a drive), and from a sensor to the controller.

Message transfer times between devices and the controller are subject to variable delays called 'jitter'. This behaviour follows statistical models such as the Poisson distribution illustrated in Fig. 5 (left part).


Fig. 5. Illustrating realtime requirements in an industrial environment: As shown on the right, a field bus transmits control messages from a controller to an actuator (e.g. a drive), and from a sensor to the controller.

While a constant delay can easily be compensated for in an industrial control system, delay variations cannot. The challenge is to keep the jitter low with respect to the response times, which vary according to the application. Controlling drives needs response times below 1 ms. For other devices, response times of 10 ms are sufficient. Machines or vehicles operated via a user terminal typically need response times in the same order of magnitude as the human response time: about 100 ms.

RT performance typically demands deterministic response times. Sensors and devices do not change their state at deterministic points in time, but knowledge of their state is requested at cyclic intervals, and they respond within those cycles. The response times that an Ethernet-based field bus can achieve depend upon network size and topology, transmission speed, and message size in bytes. One reason is the storage and forwarding of messages within each switch. Deterministic response times cannot be guaranteed by Ethernet without fixing extra conditions.
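
As a rough model of this effect, and assuming store-and-forward switching with switch-internal latency and queueing neglected, each switch on the path adds one full frame serialisation time:

# Rough model: each store-and-forward switch must receive the whole frame
# before retransmitting it, so the serialisation time accumulates per hop
# (switch latency and queueing neglected).
def path_delay_us(frame_bytes, switches, link_mbps=100):
    per_hop_us = frame_bytes * 8 / link_mbps   # bits divided by Mbit/s gives microseconds
    return (switches + 1) * per_hop_us         # +1 for the sender's own link

print(round(path_delay_us(512, switches=4), 1))   # 512-byte frame across 4 switches: ~204.8 us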

In an industrial environment, a field bus will also need to handle different traffic classes, such as process data relevant to control, diagnostics data and status messages, as well as other traffic. Response times matter for control process data. Other traffic is less critical, or not critical at all, and so may travel in a lower class with a lower priority.

Meeting application requirements

For industrial applications, different approaches are needed to meet specific requirements. Some of these approaches are compatible with IEEE 802 and some are not. The handling of traffic classes, for instance, is part of the regular Ethernet and IP standards as applied to Internet-based telecommunication networks. Under Quality of Service (QoS), suitable methods for handling different classes of traffic are described both for Layer 2 (IEEE 802.1p) and for Layer 3 (DiffServ, corresponding to RFC 2474 and RFC 2475, as well as several supplementary RFCs).

The principle (Fig. 6) is based on the classification of messages (Ethernet frames or IP packets) in a specified field in the respective message header. At each node (Ethernet switch or IP router) messages are handled according to their traffic class. Higher class messages are handled with priority. The procedure corresponds to an airline check-in: for each flight (port) there is a counter and a queue for each traffic class. The highest class passenger (message) is dispatched with priority and forwarded to the gate first. This procedure rearranges the order of messages to a specific destination according to their priority (class of service) at each node (switch or router).


Fig. 6. Showing priorities for different classes of service: The principle is based on the classification of messages (Ethernet frames or IP packets) in a specified field in the respective message header.

This procedure cannot guarantee a specific response time, which depends upon the total traffic volume. For the highest class of service, however, the amount of process data that needs to be communicated is known, so the traffic situation within that class can be regulated using suitable cycles and the corresponding volume of data and messages.
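
The check-in analogy corresponds to strict-priority queues per output port, sketched below with class names taken from the airline analogy in Fig. 6:

from collections import deque

# Sketch of strict-priority scheduling on one output port: the highest
# non-empty class is always served first.
CLASSES = ["senator", "business", "economy"]          # highest priority first
queues = {c: deque() for c in CLASSES}

def enqueue(frame, traffic_class):
    queues[traffic_class].append(frame)

def dequeue():
    for c in CLASSES:                                 # scan classes in priority order
        if queues[c]:
            return queues[c].popleft()
    return None

enqueue("status message", "economy")
enqueue("process data", "senator")
print(dequeue())   # -> 'process data' leaves first, regardless of arrival order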

However, there is interference with traffic from the lower classes. For instance, while a second-class message ('Business' class in Fig. 6) is in transmission over the destination port, the first-class message ('Senator' class in Fig. 6) has to wait. The delay depends on the length of the lower-class message. The length of Ethernet frames can vary significantly, between 64 and 1518 bytes (9000 bytes for 'jumbo' frames). IP packets may extend up to 64 KB.

If jumbo frames are not excluded, the length of the 'vehicles' on the data highway varies by a factor of 1000. At a 100 Mbps transmission rate (Fast Ethernet), a jumbo frame takes about 0.7 ms to transmit. Depending on the number of nodes between sender and receiver, this situation may repeat and so accumulate delays. This, in turn, can generate high jitter.

The transmission of the 64-byte control message itself would have taken just five microseconds. The same effect, however, points to the solution for providing RT responses with deterministic response times and limited jitter while using regular QoS features over Ethernet or IP: in such a control network, the maximum packet length should be limited to a suitable size, such as a maximum of 512 bytes per message, corresponding to a transmission time of about 40 microseconds. All senders must keep to this limit. Total delay and jitter then depend on the number of network nodes, and in a control application the network topology and the number of nodes per control segment can be designed accordingly.
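
These figures follow directly from the serialisation delay at 100 Mbps, as the short calculation below shows (preamble and inter-frame gap neglected):

# Serialisation delay of one frame at a given link speed.
def tx_time_us(frame_bytes, link_mbps=100):
    return frame_bytes * 8 / link_mbps   # bits divided by Mbit/s gives microseconds

print(round(tx_time_us(9000), 1))   # jumbo frame:    720.0 us, i.e. about 0.7 ms
print(round(tx_time_us(64), 1))     # control frame:    5.1 us
print(round(tx_time_us(512), 1))    # 512-byte limit:  41.0 us, i.e. about 40 us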

A different approach projects traditional field bus principles onto Ethernet-based field buses. With the former, all senders (sources of process data) and receivers (sinks of process data) share the bus as a joint communication medium. In such an arrangement, a moderator (bus master) arranges communication by placing requests for information to the individual participants in a controlled way. The bus master does not necessarily represent the controller or master: it just arranges the communication flow (Fig. 7).


Fig. 7. Orchestrating deterministic behaviour: A moderator or bus master arranges communication between participants by placing requests for information to the individual participants in a controlled way.

To give process data the greatest attention, a specific time slot is reserved for it (the deterministic part), and all other communication moves to a second time slot (the asynchronous part). The time slots are repeated cyclically. Within the deterministic interval, the bus master organises the flow of information between process data sources and sinks.

The asynchronous interval can be used to transmit Ethernet traffic in the usual way. Because all messages are transferred as standard Ethernet frames, the concept is compatible with regular Ethernet specifications and allows regular Ethernet traffic to be carried over the field bus segment. Within that segment, however, special Ethernet switches must be used which implement the time division multiplex.
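
A rough sketch of the cyclic time-slot scheme, with arbitrary example values for the cycle and slot lengths:

# Sketch of a cyclic schedule split into a deterministic slot for process
# data and an asynchronous slot for all other Ethernet traffic.
CYCLE_US = 1000            # example: 1 ms cycle
DETERMINISTIC_US = 400     # example: first 400 us reserved for process data

def slot_at(t_us):
    phase = t_us % CYCLE_US
    return "deterministic" if phase < DETERMINISTIC_US else "asynchronous"

print(slot_at(150))   # -> 'deterministic'  (process data only)
print(slot_at(750))   # -> 'asynchronous'   (regular Ethernet traffic)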

Another approach assumes a linear bus topology and packs all data intended for the devices down the chain into one single message. The format of the data is specified as a structure within the payload section of the Ethernet frame, which keeps the Ethernet frame compliant with IEEE 802 without modifications. Because the process data is communicated as part of the payload section, any regular Ethernet switch can carry the message containing the process data.

Within a linear field bus section - a daisy chain - all devices connect via a specific bus coupler. Each bus coupler forwards the frame to the next bus coupler further down the chain. While forwarding the message, the bus coupler accesses the process data, extracts any information intended for its device, and inserts any output of its device into the process data structure. At the end of the chain, the message is returned back up the chain; on the way back it is merely passed to the next device upstream, without access to the process data being required.

This concept is also compatible with IEEE 802. However, the field bus section requires special bus couplers able to extract and insert process data while forwarding the message further downstream. The process data is handled at the application layer; there is no impact on Layer 2. On the way back upstream, messages are passed exactly like ordinary Ethernet frames.
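
The 'processing on the fly' of the bus couplers can be sketched as follows, assuming an invented payload layout in which each device owns a fixed slice of the shared process data:

# Sketch of a daisy-chained bus coupler: each device owns a fixed slice of
# the process data carried in one shared Ethernet payload. On the way
# downstream it reads its inputs and writes its outputs into that slice.
SLICE = 4   # bytes of process data per device (example value)

def coupler_pass(payload, device_index, output_bytes):
    start = device_index * SLICE
    inputs = bytes(payload[start:start + SLICE])                        # data for this device
    payload[start:start + SLICE] = output_bytes.ljust(SLICE, b"\x00")   # insert own output
    return inputs

frame_payload = bytearray(3 * SLICE)          # shared payload for three devices
coupler_pass(frame_payload, 0, b"\x01\x02")   # device 0 writes its output
coupler_pass(frame_payload, 1, b"\xaa")       # device 1 writes its output
print(frame_payload.hex())                    # -> '01020000aa00000000000000'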

For interconnecting different network segments or field bus segments, linear topologies require the least cabling effort. However, if a link fails, all devices behind the broken link become inaccessible.

Continue reading part 2 of this article.

Reiner Grübmeyer and Stephan Rupp work for Kontron.

www.kontron.com


Source: Industrial Ethernet Book Issue 66 / 44