Reducing message-length variations in resource-constrained embedded systems implemented using the Controller Area Network (CAN) protocol

  • Authors:
  • Mouaaz Nahas, Michael J. Pont, Michael Short

  • Affiliations:
  • Embedded Systems Laboratory, University of Leicester, University Road, Leicester LE1 7RH, UK (all authors)

  • Venue:
  • Journal of Systems Architecture: the EUROMICRO Journal
  • Year:
  • 2009

Abstract

The Controller Area Network (CAN) protocol is widely used in low-cost embedded systems. CAN uses "Non-Return-to-Zero" (NRZ) coding and includes a bit-stuffing mechanism. While bit stuffing provides an effective mechanism for clock synchronization, it causes the CAN frame length to become (in part) a complex function of the data contents: variations in frame length can have a detrimental impact on the real-time behaviour of systems employing this protocol. In this paper, two software-based mechanisms for reducing the impact of CAN bit stuffing are considered and compared. The first approach considered is a modified version of a technique described elsewhere (e.g. Nolte et al. [T. Nolte, H.A. Hansson, C. Norstrom, Minimizing CAN response-time jitter by message manipulation, in: Proceedings of the Eighth IEEE Real-Time and Embedded Technology and Applications Symposium (RTAS 2002), San Jose, California, 2002]). The second approach considered is a form of software bit stuffing (SBS). In both cases, we consider not only the impact on message-length variations but also the implementation costs (including CPU and memory requirements) of creating a practical implementation of each technique on a range of appropriate hardware platforms. It is concluded that the SBS technique is more effective in reducing message-length variations, but at the cost of increased CPU time and memory overheads and a reduction in the available data bandwidth. The choice of the most appropriate technique will, therefore, depend on the application requirements and the available resources.
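The frame-length variation the paper targets, and the XOR-masking idea underlying the Nolte et al. technique, can be illustrated with a small sketch (Python here for brevity; function names are illustrative, and a real CAN controller stuffs the whole frame, including header and CRC fields, not just the payload):

```python
def stuff(bits):
    """Apply the CAN bit-stuffing rule: after every run of 5 identical
    consecutive bits, insert one bit of opposite polarity. Stuffed bits
    themselves count towards the next run."""
    out, prev, run = [], None, 0
    for b in bits:
        out.append(b)
        run = run + 1 if b == prev else 1
        prev = b
        if run == 5:
            out.append(1 - b)   # stuff bit of opposite polarity
            prev, run = 1 - b, 1
    return out


def xor_mask(bits):
    """XOR the bit stream with an alternating 1010... mask, so that long
    runs of identical data bits become alternating bits on the bus
    (the same operation at the receiver recovers the original data)."""
    return [b ^ (1 - i % 2) for i, b in enumerate(bits)]


# Worst case: a 64-bit all-zero payload attracts 12 stuff bits...
zeros = [0] * 64
print(len(stuff(zeros)))            # 76 bits on the wire

# ...while an alternating payload attracts none.
alt = [i % 2 for i in range(64)]
print(len(stuff(alt)))              # 64 bits on the wire

# Masking the worst-case payload removes the runs, and hence the
# data-dependent length variation, at no bandwidth cost.
masked = xor_mask(zeros)
print(len(stuff(masked)))           # 64 bits on the wire
print(xor_mask(masked) == zeros)    # True: mask is its own inverse
```

By contrast, the software bit stuffing (SBS) approach examined in the paper performs the stuffing in software before transmission, which (as the abstract notes) removes more of the variation but consumes payload bandwidth and extra CPU time.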