TC 10 - Computer Systems Technology - Aims and Scopes

est. 1976, revised 1987


The aims of the Committee are to promote the state of the art and to coordinate the exchange of information on concepts, methodologies, and tools across all stages of the life cycle of computer systems. The scope includes:


*    system and component concepts, architecture and organisation;

*    specification, design and verification methodologies of computer systems;

*    logical design and fabrication of components and systems;

*    evaluation of the parameters of computer systems and components;

*    reliability;

*    assessment of emerging technologies;

*    application specific computer systems and components including peripherals. 

WG10.2 – Embedded Systems
est. 2006


WG10.2 shall be constituted as a group under the sponsoring organization with the following basic aims:

*    to be the internationally open reference group, promoted and sponsored by the sponsoring organization of WG10.2, for all aspects of embedded system design;

*    to further the dissemination and exchange of information and experience on research and applications in the area of embedded systems;

*    to address ES designers and researchers from both industry and academia;

*    to encourage education in all areas of embedded systems;

*    to further the interdisciplinary character of embedded systems, which encompasses hardware (systems on a chip), real-time software, real-time operating systems, control theory, intelligent features, and dependability issues.


Embedded systems are gaining importance in all areas of engineering. In the near future, hardly any technical artifact is expected to exist without embedded information technology. There is a trend towards software-oriented embedded and/or dependable systems based on standardized microcontroller cores, which implies that the design of embedded real-time software and real-time operating systems will play a dominant role in this field. As networks of microcontrollers become more widespread, real-time communication systems and, more generally, the design of distributed embedded systems will gain importance. As high-performance embedded computing components have become available, the challenges of designing embedded systems have become more acute.

In detail, the scope of WG10.2 is to:

*    organize events in the area of ES (e.g. DIPES, Distributed and Parallel Embedded Systems);

*    seek co-operation with user and interest groups as well as with ES-oriented groups within IFIP and other societies;

*    discuss, disseminate and exchange information on ES-related standardization activities;

*    study and encourage curricula on ES design;

*    initiate and organize new ES-related activities.

WG10.3 - Concurrent Systems
est. 1978, revised 1979, 1988, 2006


The study of computer systems, having several computing elements, with the goal of improving the quality of attributes such as cost, performance, programmability, extendability and functionality.

The study includes the interrelation software/firmware/hardware in specification, design and implementation. 


*    Exploration of problem areas and solutions pertaining to the interrelation between the hardware functions and the software functions in systems such as supervisors, data management, language translators, I/O systems, and user interfaces.

*    Evaluation of the implications of trends in computer systems technology for the interrelation of software, firmware and hardware.

*    Evaluation of the implications of this interrelation for trends in computer systems technology.

WG10.4 - Dependable Computing and Fault Tolerance
est. 1980, revised 1988


Increasingly, individuals and organizations are developing or procuring sophisticated computing systems on whose services they need to place great reliance. In differing circumstances, the focus will be on differing properties of such services - e.g. continuity, performance, real-time response, ability to avoid catastrophic failures, prevention of deliberate privacy intrusions. The notion of dependability, defined as that property of a computing system which allows reliance to be justifiably placed on the service it delivers, enables these various concerns to be subsumed within a single conceptual framework. Dependability thus includes as special cases such attributes as reliability, availability, safety, and security. The Working Group aims at identifying and integrating approaches, methods and techniques for specifying, designing, building, assessing, validating, operating and maintaining computer systems that should exhibit some or all of these attributes.


Specifically, the Working Group is concerned with progress in:

*    understanding of faults (accidental faults, be they physical, design induced, originating from human interaction; intentional faults) and their effect;

*    specification and design methods for dependability;

*    methods for error detection and processing, and for fault treatment;

*    validation (testing, verification, evaluation) and design for testability and verifiability;

*    assessing dependability through modelling and measurement. 

WG10.4 SIG on Education in Resilient Computing
est. 2009


The primary aims of the SIG are:

*    To acquire knowledge on how Resilient Computing is taught today in higher education institutions worldwide;

*    To compare these experiences so as to provide an incremental process towards the structuring of an educational track in Resilient Computing;

*    To promote the outcomes of the SIG so as to update, change, or start proper tracks in Resilient Computing in higher education institutions;

*    To interact with international bodies working on educational issues, e.g. ACM, IFIP, etc., to present the outcomes of the SIG;

*    To collect and make accessible, through the web, support material useful to cover the several disciplines relevant to Resilient Computing;

*    To build and maintain a comprehensive database of material, available to the community of students, scientists, industrial designers and regulatory bodies.


The adjective resilient has been used for decades in the field of dependable computing systems, essentially as a synonym of fault-tolerant, thus generally ignoring the unexpected aspects of the phenomena such systems may have to face. These phenomena become of primary relevance when moving to systems like the future large, networked, evolving systems constituting complex information infrastructures – perhaps involving everything from super-computers and huge server “farms” to myriads of small mobile computers and tiny embedded devices, with humans being a central part of the operation of such systems. Such systems in fact represent the emergence of the ubiquitous systems that will support Ambient Intelligence.

From an educational point of view, very few universities, if any, offer a comprehensive and methodical curriculum that provides students with the multi-disciplinary preparation needed to cope with the challenges posed by the design of ubiquitous systems. This multi-disciplinarity spans dependability, security, usability, human factors, legal issues, and ethics. Thus, from the educational point of view, there is a need to scale up the spectrum of topics offered and to identify the curricular structure that best supports both the teaching and the learning process.

It is thus relevant to have an open worldwide forum in which the different educational approaches to teaching Resilient Computing are presented, compared and discussed to reach an agreed approach to this issue.

In addition, it will be very valuable to collect together, in an open and public database, all available support material (such as lecture slides, textbooks, relevant literature, links to useful sites, etc.) that covers the different facets of this multi-disciplinarity.

A first attempt to offer our community a proposal for an MSc curriculum in Resilient Computing and to gather extended support material was made very recently within the European Network of Excellence ReSIST; the material is accessible at

WG10.4 SIG on Concepts and Ontologies
est. 2009


*    1. To take part in the development of the updated Computing Classification System (CCS) undertaken by the ACM, to ensure that our domain of interest is properly represented, since this was not the case in the two previous versions (1988 and 1998) of the CCS.

*    2. To develop a thesaurus and an ontology that integrate the concepts of dependability, security, resilience, robustness, trustworthiness, survivability, high confidence, information assurance, self-healing (and possibly other related terms), and identify their similarities and differences.

*    3. To employ document clustering algorithms and other classification techniques in order to create a methodology for the automatic identification of related documents from all the domains listed in Aim 2 above, and to use the methodology in developing automatic tools that assist researchers and referees in creating and evaluating new research results.

*    4. To use advanced natural language processing (NLP) tools and to collaborate with artificial intelligence experts from the computational linguistics and knowledge representation domains in the pursuit of Aims 2 and 3 above.

*    5. To use our experience to promote the formation of an IFIP activity aimed at creating a thesaurus, an ontology, and a classification system for the entire field of informatics (computer science and engineering), possibly in collaboration with the ACM.


Dependability has naturally concerned most disciplines of computer science and engineering (informatics) since the early days. As a consequence, significantly different terminologies were developed by different communities to describe the same aspects of dependability. The terminologies became entrenched through usage at annual conferences, in books, journals, research reports, standards, industrial handbooks and manuals, patents, etc.

As an illustration, we have the concepts of dependability, security, trustworthiness, survivability, high confidence, resilience, information assurance, robustness, self-healing, etc., whose definitions appear to be identical or to overlap extensively. In many cases the definitions themselves have multiple versions that depend on a given author’s preference.

An example of a long-term effort to create a framework of dependability and security concepts is the effort within IEEE CS TC/DCFT and IFIP WG 10.4 that since a special session at FTCS-12 in 1982 has resulted in a series of papers, a six-language book, and in 2004 a “Taxonomy” paper in vol.1, no.1 of the IEEE Transactions on Dependable and Secure Computing. No other community has produced such a taxonomy.

The description of a domain by several synonyms or near-synonyms that lack well-defined distinctions is a source of continuing confusion that leads to re-inventions and plagiarism, impairs the transfer of research results to practical use in industry and impairs the recognition of related documents.

The orderly progress of dependability research and its practical applications requires that past work as well as new results should be classified on the basis of a single ontology and thus made accessible to the entire profession. However, it is unreasonable to expect that a committee formed by the different communities could by volunteer effort create a taxonomy document from which a single consensus ontology could be generated.

It must be concluded that today the purely “intellectual” (i.e., human) process of ontology building for dependability concepts is reaching its limits. The complementary solution is to augment the human effort by the use of automatic natural language processing tools that have been developed by computer linguists. The next step must be computer-aided building of a consensus ontology.

During the past decade much progress has been made in the development of computer tools for human language processing. Such tools have been developed for the extraction of term candidates from a corpus (a set of texts). A thesaurus (a list of important terms, with related terms for each entry) is constructed from the term candidates. The ontology for a given domain is a data model that represents those terms and their relationships. Automatic indexation of the texts is carried out using the thesaurus, followed by clustering analysis using statistical and linguistic techniques. A measure of similarity between texts is computed that serves as a basis for automatic classification. The applicability of these techniques to texts in the dependability domain has been part of research supported by the European Network of Excellence ReSIST (Resilience for Survivability in Information Society Technologies) in 2006-2009.
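The core of the similarity step described above can be illustrated with a minimal, self-contained sketch: term weighting by TF-IDF, followed by cosine similarity between the resulting term-weight vectors. This is not the ReSIST tooling; the toy corpus and the document names (d1-d3) are invented purely for illustration.

```python
# Minimal sketch of TF-IDF weighting and cosine similarity between texts.
# The corpus and document names are invented; real pipelines also perform
# term extraction, thesaurus construction, and clustering.
import math
from collections import Counter

corpus = {
    "d1": "fault tolerant system design for dependable computing",
    "d2": "dependable computing and fault tolerance in system design",
    "d3": "ontology building with natural language processing tools",
}

tokenized = {name: text.split() for name, text in corpus.items()}

def tfidf(doc_terms, all_docs):
    """Weight each term by its frequency times inverse document frequency."""
    tf = Counter(doc_terms)
    n = len(all_docs)
    return {
        t: f * math.log(n / sum(1 for d in all_docs if t in d))
        for t, f in tf.items()
    }

vectors = {
    name: tfidf(toks, list(tokenized.values()))
    for name, toks in tokenized.items()
}

def cosine(a, b):
    """Cosine similarity between two sparse term-weight vectors."""
    dot = sum(w * b.get(t, 0.0) for t, w in a.items())
    na = math.sqrt(sum(w * w for w in a.values()))
    nb = math.sqrt(sum(w * w for w in b.values()))
    return dot / (na * nb) if na and nb else 0.0

# Documents on the same topic score higher than unrelated ones.
print(cosine(vectors["d1"], vectors["d2"]) > cosine(vectors["d1"], vectors["d3"]))
```

Such pairwise similarity scores are exactly what clustering algorithms consume when grouping related papers for automatic classification.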

The corpus is composed of the texts of nearly 2000 papers presented at all 29 FTCS and 7 DSN conferences (1971-2006). The encouraging results of processing the texts from the FTCS/DSN community lead to the conjecture that similar processing of texts from other conferences, journals, books, industrial documents, etc., will produce other ontologies that can be merged into a consensus ontology covering the entire domain of dependability and its near-synonyms.

A dependability ontology is an integral part of a (still non-existent) ontology for all of computer science and engineering. The only existing and widely used taxonomy that could serve as a basis is the ACM Computing Classification System (CCS). The CCS was created in 1988 and was last revised in 1998. It has fallen far behind the evolution of CS&E and information technology: the concepts of dependability are treated very inadequately, and many significant dependability terms are altogether missing from the 1998 ACM CCS taxonomy.

The coming update of the CCS is a challenge to the dependability community: we must take part in the process of creating an up-to-date and evolvable version of the CCS that adequately incorporates dependability concepts. The new CCS would allow the computer-aided construction of a thesaurus and an ontology for the entire CS&E profession. However, a consensus dependability ontology with explicit synonymy relations must be available to the CCS builders.

Finally, it is very appropriate for IFIP to take part in the building of a CCS. The experience of the SIG can serve as a starting point for such an effort within IFIP.


WG10.5 - Design and Engineering of Electronic Systems
est. 1981, revised 1988, merged with WG 10.2 in 1994, rev. 2003


Electronic system design demands the tight integration of a very broad profile of knowledge and skills, ranging from hardware and software system architecture to semiconductor physics.
Functionality of complex embedded or stand-alone systems, to be applied in areas such as general-purpose computing, telecommunications, automotive, entertainment, and multimedia, may be realized by various combinations of analog and digital hardware and software parts.
Systems can be implemented by single or multiple integrated circuits and software modules that can be special-purpose, programmable, or reconfigurable.
The Working Group aims to provide a forum in which creative experts can explore problem areas and solutions for the design of such complex electronic systems, and to disseminate those solutions to a broader industrial and educational sphere.


The Working Group is interested in a broad range of topics related to the design and engineering of heterogeneous systems containing hardware, software, and even mechanical parts, including:

*    System Design Methods

*    Embedded Systems

*    Modeling and Specification

*    Design Validation

*    Formal Methods in Design

*    Synthesis

*    Design Environments

*    Reconfigurable Computing

*    VLSI Systems and Applications

*    Physical Design

*    Test and Testability

*    Power-aware Design

*    Analog and Mixed-Signal Systems

*    Fundamental CAD Algorithms