Introduction

The powerful functionality offered by memory mapped interconnect hardware is well suited to implementing low latency, high-performance distributed solutions. Writing the required device drivers and control software to initialize and manage memory mapped interconnects from scratch, however, is a major undertaking that requires substantial system and interconnect hardware experience and knowledge.

The goal of the SISCI API is to provide a portable software stack that, by design, adds no significant overhead to data transfer operations and significantly simplifies the use of advanced remote memory access networks.

For an application developer, the SISCI API represents a rich interface to memory mapped hardware and lower-level software services.

The SISCI API

The SISCI API was one of the main outcomes of the EU-funded Esprit Project 23174, “Standard Software Infrastructures for SCI-based Parallel Systems”, whose purpose was to encourage the development of software support for parallel processing on clusters of PCs or workstations connected with a fast “memory mapped” interconnect, initially for the SCI (Scalable Coherent Interface).

The SISCI API supports data transfers based either on distributed remote memory access or on Direct Memory Access (DMA). It also allows users to trigger remote interrupts and to catch and handle events generated by the underlying interconnect system (such as a network cable being unplugged). Data can be transferred between system memories, or between system memory and IO devices, connected by the underlying memory mapped interconnect. The SISCI API has proven portable and very valuable for a number of memory mapped interconnect technologies, such as SCI, ASI, and PCI Express.

The SISCI API is currently available for the following interconnects from Dolphin and OEM partners:

  • SCI (initially with an SBus and PCI interface)

  • DX (Based on ASI / StarGen PCI Express Gen1)

  • IX (Based on IDT PCI Express Gen2 chips)

  • PX (Based on PLX/Avago/Broadcom PCI Express Gen2 and Gen3 chips)

  • INX (Intel NTB enabled server systems)

  • MX (Based on Microsemi Switchtec chips)

  • Various OEM solutions based on standard PCIe chipsets from IDT, Microsemi and PLX

System security

The regular SISCI API functionality and implementation are designed to create an easy-to-use and safe environment. The functionality is implemented in close interaction with standard IOMMU functionality and lower-level drivers to prevent malfunctioning software (e.g. due to software bugs) from accessing remote memory outside of exported SISCI segments. This remains true after hot-plug events and system reboots, even if the customer application software is not designed to fully support such events.

Interoperability

The SISCI API is designed to be system architecture and operating system independent, allowing users to write portable applications that communicate across systems without adding overhead or performance penalties.

The SISCI API supports connecting both little-endian and big-endian systems; the interpretation of data in mixed-endian environments is up to the application programmer.
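A common convention, for instance, is to agree on a fixed byte order for all data written into shared segments. The following sketch illustrates the idea using the POSIX htonl()/ntohl() helpers; it is not part of the SISCI API, and remote is a hypothetical pointer to a mapped remote segment (mapping is covered later in this guide):

#include <arpa/inet.h>  /* POSIX byte order helpers htonl()/ntohl() */
#include <stdint.h>

/* Write a 32-bit value into a mapped remote segment in network
   (big-endian) byte order, so that a reader on any architecture
   can decode it with ntohl(). */
static void write_be32(volatile uint32_t *remote, uint32_t value)
{
    *remote = htonl(value);
}

/* The reader converts back to host byte order before use. */
static uint32_t read_be32(volatile const uint32_t *remote)
{
    return ntohl(*remote);
}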

Resources and resource dependencies

The SISCI API makes extensive use of the “resource” concept: a virtual device is a resource, a memory segment is a resource, a DMA queue is a resource, and so on. The full list of available resources will become clear as you go through this guide.

A resource is usually associated with a number of properties, which are collected in a descriptor. The contents of a descriptor, i.e. the resource properties, are not directly visible to the user, who must use the appropriate API functions to manage them. In other words, a descriptor is opaque to the user. Instead, the user is given a descriptor handle, which is passed to the API functions.

Names of descriptors and handles are derived from the resource name. For a local segment, for instance, the descriptor is called sci_local_segment, and the handle is called sci_local_segment_t.

Resources may depend on other resources. For example, the function that creates a local segment needs a reference to an open virtual device, meaning that a local segment depends on a virtual device.

The dependency implies that a resource must not be freed while another resource relies on it; attempting to do so generates an error. Using the example above, a virtual device cannot be closed until all the local segments associated with it have been released.
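As an illustration of this ordering, the following sketch creates a local segment on an open virtual device and then releases the resources in reverse order of creation. The segment ID and size are arbitrary example values, error handling is abbreviated, and the functions used here are covered in detail in later chapters:

#include <stdio.h>
#include "sisci_api.h"
#include "sisci_error.h"

#define NO_FLAGS    0
#define NO_CALLBACK 0
#define NO_ARG      0

int main(void)
{
    sci_desc_t          vd;      /* virtual device handle */
    sci_local_segment_t segment; /* local segment handle  */
    sci_error_t         error;

    SCIInitialize(NO_FLAGS, &error);

    /* A local segment depends on an open virtual device ... */
    SCIOpen(&vd, NO_FLAGS, &error);
    SCICreateSegment(vd, &segment, 1 /* segment ID */, 4096 /* bytes */,
                     NO_CALLBACK, NO_ARG, NO_FLAGS, &error);
    if (error != SCI_ERR_OK)
        fprintf(stderr, "SCICreateSegment failed\n");

    /* ... so resources are released in reverse order of creation:
       the segment first, then the virtual device. */
    SCIRemoveSegment(segment, NO_FLAGS, &error);
    SCIClose(vd, NO_FLAGS, &error);

    SCITerminate();
    return 0;
}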

About this guide

This document will guide you through the fundamental features of the SISCI API V2. At the end, you will be able to manage memory segments, transfer data from one node to another in several different ways, and send interrupts to remote processes.

We will start by addressing some generic aspects, such as initializing the SISCI API library and querying information about the interconnect fabric. This is presented in the System aspects chapter. You will learn basic memory management in the Memory segments chapter, including how to allocate memory segments, how to make them available to other nodes, and how to connect to a remote memory segment. When you have completed the basic memory management section, you will be ready to perform data transfers between several nodes.

Data transfer methods are described in the Accessing memory chapter and in the DMA chapter.

A memory mapped interconnect system may contain several nodes sharing a global memory structure. Interrupts may be used for synchronization: they are a fast way of notifying another node that something has occurred, and you will learn how to use them in the Interrupts chapter.

Finally, the Advanced features chapter deals with topics such as managing events and checking for data transfer errors.

We recommend keeping the SISCI API Functional Specification at hand when reading this guide. In particular, the specification will be useful as a reference for function prototypes and for the list of possible errors generated by a function call.

This guide covers the functionality available with SISCI API version 2.

Examples in this guide

The textual explanations are enriched with C code excerpts that show how things are used in practice; from time to time, whole programs implementing a send-receive example are included in order to summarize the concepts explained so far (such programs are expected to compile and run correctly). The choice of a send-receive pattern for the examples is motivated by its simplicity; the interconnect hardware and software of course allow you to do much more than that.

A program making use of the SISCI API library must include the header files sisci_api.h and sisci_error.h. For simplicity, this is done only in the full programs, not in code excerpts. Please refer to the documentation that comes with the software distribution to find out where these header files are located and for additional information on how to compile and link SISCI applications.

Applications that want to make use of “callbacks” (more on this later in this guide) must be compiled with the -D_REENTRANT compiler flag.
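As a sketch, a typical source file and build command might look as follows; the library name and include path shown here are assumptions, so consult the documentation for your software distribution:

/* Every SISCI program includes these two headers: */
#include "sisci_api.h"
#include "sisci_error.h"

/* A build command for a callback-enabled application might look like
   (library name and include path are assumptions; consult your
   distribution's documentation):

       cc -D_REENTRANT -I<sisci include dir> my_app.c -lsisci
*/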

Although they are not part of the API, many examples found both in this guide and in the software distribution make use of the following constants:

#define NO_FLAGS 0
#define NO_CALLBACK 0
#define NO_ARG 0

We also assume that the following identifiers are defined. Their values are not important but must be legal (example definitions are sketched after the list):

  • ADAPTER_NO is the identifier of the interconnect adapter card. A system can have several adapters, each identified by a host-wide unique adapter number. This guide assumes there is only one card present on each system and that its identifier is the same on all systems.

  • SENDER_NODE_ID and RECEIVER_NODE_ID are the SISCI node identifiers of the two nodes involved in the example code. Each node connected to the network is assigned a unique node identifier.

  • SENDER_SEG_ID and RECEIVER_SEG_ID are the identifiers (segment IDs) of the SISCI memory segments created on the two nodes in the example code. The sizes (in bytes) of the two segments are SENDER_SEG_SIZE and RECEIVER_SEG_SIZE, respectively.
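One possible set of definitions is sketched below; the values are examples only and depend on your fabric configuration and application design:

/* Example values only: node IDs depend on the fabric configuration,
   segment IDs and sizes on the application design. */
#define ADAPTER_NO        0
#define SENDER_NODE_ID    4
#define RECEIVER_NODE_ID  8
#define SENDER_SEG_ID     1
#define RECEIVER_SEG_ID   2
#define SENDER_SEG_SIZE   4096
#define RECEIVER_SEG_SIZE 4096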

Examples in the software distribution

The SISCI software is normally distributed with a number of SISCI tests, benchmarks and example programs. SISCI programmers are encouraged to study these examples and play with the benchmark tools to learn more about the SISCI API and the power of remote memory access.