Preliminary Program

Sunday 6

Wine and cheese in the evening in Paris

Monday 7

  9:30    Welcome

10:00    Invited Speaker C. Teuscher (LANL)
            It's Time to Space Out: Computation and Communication in
            Massive-Scale Systems


11:00    Coffee Break

11:30    F. Gruau (Lab. LRI)
            The Foundation of Self-developing Blob Machines for Spatial Computing

12:15    H. Berry (INRIA Saclay)
            Neuro-inspired spatial structures for self-organized developing networks

13:00    Lunch Break

14:30    Invited Speaker G. J. Sussman (MIT)
            Evolvability, Robust Design, and Space

15:30    J.-L. Giavitto & O. Michel (Lab. IBISC & CNRS)
            Rule-based Modelling of Spatial Systems

16:10    Coffee Break

16:40    J. Lizier (U. of Sydney)
            Analysing the information dynamics of flocking

17:30    Short Presentations

            L. Maignan (INRIA)
            A 1D CA that moves particles until regular spatial placement

18:30    End of the day

Tuesday 8

9:00      Invited Speaker A. Adamatzky (U. of Bristol)
            To be confirmed

10:00    Coffee Break

10:30    N. Fates, A. Spicher & O. Simonin (LORIA)
            From Reactive Multi-Agents Models to Cellular Automata

11:15    R. Doursat (Lab. ISC-PIF)
            From Morphogenesis to Embryomorphic Engineering

11:55    S. Verlan (Lab. LACL)
            Membrane systems: computational and modeling power



Abstracts

C. Teuscher -  It's Time to Space Out: Computation and Communication in Massive-Scale Systems

Since the beginning of modern computer science some sixty years ago, we have been building computers in more or less the same way. Silicon electronics serves as a physical substrate, the von Neumann architecture provides a computer design model, while the abstract Turing machine concept supports the theoretical foundations. That is changing: in recent years, previously unimagined computing devices have emerged thanks to advances in synthetic biology, nanotechnology, materials science, and neuroscience. These novel devices typically share two characteristics: (1) they are made up of massive numbers of simple, stochastic components which (2) are embedded in 2D or 3D space in a disordered way. A grand challenge in computer science consists in developing computing paradigms, design methodologies, formal frameworks, architectures, and tools that allow us to compute reliably and solve problems efficiently with such emerging devices.

In this talk, I will outline our visionary and long-term research efforts to address the grand challenge of building, organizing, and programming emerging computing machines. In particular, I will present efforts underway at LANL to self-assemble massive-scale nanowire-based interconnect fabrics for spatial computers. Using Random Boolean Networks (RBNs) as a showcase, I will then illustrate that irregular assemblies and specific interconnects can have major advantages over purely regular and locally interconnected fabrics. Last, I will present recent results on the robustness and critical connectivity of such non-classical computing architectures.
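As background on the showcase model, here is a minimal Python sketch of a Random Boolean Network (an illustration only, not the speaker's implementation; the sizes n and k, the seeds, and the attractor search are arbitrary choices): each node reads k random inputs through a random Boolean function, all nodes update synchronously, and the trajectory eventually falls onto an attractor.

    import random

    def make_rbn(n, k, seed=0):
        # Each node gets k random input nodes and a random Boolean
        # function, stored as a truth table of size 2**k.
        rng = random.Random(seed)
        inputs = [rng.sample(range(n), k) for _ in range(n)]
        tables = [[rng.randint(0, 1) for _ in range(2 ** k)] for _ in range(n)]
        return inputs, tables

    def step(state, inputs, tables):
        # Synchronous update: every node reads its inputs and looks up
        # its next value in its truth table.
        new = []
        for ins, table in zip(inputs, tables):
            idx = 0
            for i in ins:
                idx = (idx << 1) | state[i]
            new.append(table[idx])
        return new

    n, k = 16, 2
    inputs, tables = make_rbn(n, k)
    rng = random.Random(1)
    state = [rng.randint(0, 1) for _ in range(n)]
    seen = {}
    for t in range(2 ** n + 1):          # the finite state space must repeat
        key = tuple(state)
        if key in seen:
            print("transient", seen[key], "attractor length", t - seen[key])
            break
        seen[key] = t
        state = step(state, inputs, tables)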

F. Gruau - The Foundation of Self-developing Blob Machines for Spatial Computing

The current trend in electronics is to integrate more and more transistors on a chip, producing massive hardware resources. As a consequence, traditional computing models, which mainly compute in the temporal domain, no longer work well: it becomes increasingly difficult to orchestrate these massive-scale hardware resources in a centralized way. Spatial computing is a unifying term that embodies many unconventional computing models; it means computing on a relatively homogeneous physical medium made of hardware components, where the communication time depends on the Euclidean distance between the components (locality constraint). This constraint makes programming for high performance significantly more complex than on classical non-spatial hardware, because performance now depends on where computation happens in space (mapping problem).

Blob computing is a new approach that addresses this parallel computing challenge in a radically new and unconventional way: it decouples the mapping of computations onto the hardware from the software programming while still elegantly exploiting the space of the
underlying hardware. Hardware mapping of computations is done by a physical force-based approach that simulates forces between threads of computation (automata). Attractive forces keep automata that need to communicate with each other close together, while repulsive forces are used for load balancing. The advantage of these primitives is that they are simple enough to be implemented on an arbitrary computing medium. They form the basis of a runtime system (RTS) that transforms an arbitrary computing medium into an easier-to-program virtual machine called the blob machine. The basic objects of the blob machine are those automata, and its instructions let automata create new automata in specific ways so as to maintain a hierarchical organisation (which facilitates both the mapping and the programming).
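A toy rendition of the force-based mapping idea (an illustration only, not the actual RTS; the force laws, constants and layout are assumptions): automata joined by communication links attract each other, all pairs repel for load balancing, and positions relax toward an equilibrium on a 2D medium.

    import math, random

    def relax(pos, links, steps=300, k_att=0.05, k_rep=0.2):
        # Move automata under spring-like attraction along communication
        # links and pairwise repulsion used for load balancing.
        n = len(pos)
        for _ in range(steps):
            force = [[0.0, 0.0] for _ in range(n)]
            for a in range(n):
                for b in range(n):
                    if a == b:
                        continue
                    dx = pos[b][0] - pos[a][0]
                    dy = pos[b][1] - pos[a][1]
                    d = math.hypot(dx, dy) + 1e-9
                    # repulsion: push every pair apart, fading with distance
                    force[a][0] -= k_rep * dx / d ** 3
                    force[a][1] -= k_rep * dy / d ** 3
            for a, b in links:
                dx = pos[b][0] - pos[a][0]
                dy = pos[b][1] - pos[a][1]
                # attraction: pull communicating automata together
                force[a][0] += k_att * dx; force[a][1] += k_att * dy
                force[b][0] -= k_att * dx; force[b][1] -= k_att * dy
            for i in range(n):
                pos[i][0] += force[i][0]
                pos[i][1] += force[i][1]
        return pos

    rng = random.Random(0)
    pos = [[rng.random(), rng.random()] for _ in range(8)]
    links = [(0, 1), (1, 2), (2, 3), (4, 5), (5, 6), (6, 7)]
    print(relax(pos, links)[:2])   # two chains drift apart, each stays compact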

We detail the basic instructions of the blob machine and demonstrate their confluence. Programming a spatial medium to perform a given algorithm then boils down to programming the blob machine, provided the RTS is implemented on it. The advantage of this approach is hardware independence: the same program can be used on different media. By means of several examples programmed in a high-level language, we further show that we can efficiently implement most current parallel computing models, such as Single Instruction Multiple Data (SIMD), data parallelism, ``divide and conquer'' parallelism and pipelining, which demonstrates parallel expressiveness. On sorting and matrix multiplication algorithms, we also show that our approach scales up optimally with the number of basic hardware components.

H. Berry - Neuro-inspired spatial structures for self-organized developing networks

Carbon nanotubes are often seen as the only alternative technology to silicon transistors. While they are the most likely short-term alternative, other longer-term alternatives should be studied as well, even if their properties are less familiar to chip designers. While contemplating biological neurons as an alternative component may seem preposterous at first sight, significant recent progress in CMOS-neuron interfaces suggests this direction may not be unrealistic; moreover, biological neurons are known to self-assemble into very large networks capable of complex information processing tasks, something that has yet to be achieved with other emerging technologies. Furthermore, the structure of these biological networks may be a good source of inspiration for the organization of future computing systems.

The first step toward designing computing systems on top of biological neurons, or inspired by their self-organized structures, is to build an abstract model of self-assembled biological neural networks, much as computer architects manipulate abstract models of transistors and circuits. Here, we propose a simple model of the structure of biological neural networks that explicitly assumes growth in 3D Euclidean space. We provide empirical evidence that this model matches the biological neural networks found in living organisms, including the small-world graph properties commonly found in many large self-organized systems and biological neural networks. We further show that the model reproduces even finer structural properties (connectivity distribution, asymmetry, assortativity).
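A generic sketch of such a distance-dependent growth model (not the authors' exact rules; the exponential decay law and all constants are assumptions): nodes appear one at a time at random 3D positions, wire preferentially to nearby nodes, and simple graph statistics can then be inspected.

    import math, random

    rng = random.Random(0)
    nodes, edges = [], set()
    decay = 3.0   # illustrative: connection probability ~ exp(-decay * distance)

    for t in range(150):
        p = (rng.random(), rng.random(), rng.random())  # new node position in 3D
        for i, q in enumerate(nodes):
            if rng.random() < math.exp(-decay * math.dist(p, q)):
                edges.add((i, t))       # closer nodes connect more often
        nodes.append(p)

    # quick structural check: mean degree and global clustering (transitivity)
    adj = {i: set() for i in range(len(nodes))}
    for a, b in edges:
        adj[a].add(b); adj[b].add(a)
    deg = sum(len(v) for v in adj.values()) / len(nodes)
    tri = triples = 0
    for i, nb in adj.items():
        nb = sorted(nb)
        for x in range(len(nb)):
            for y in range(x + 1, len(nb)):
                triples += 1            # a path of length 2 centered at i
                if nb[y] in adj[nb[x]]:
                    tri += 1            # ... that closes into a triangle
    clustering = tri / triples if triples else 0.0
    print("mean degree %.2f, clustering %.3f" % (deg, clustering))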

Because network growth in our model is governed by simple local rules and characteristics, it enables the development of potentially large but biologically realistic neural networks, as will be needed for complex information processing/computing tasks. Based on this model, future work will target the evolution and learning properties of such networks, and how they can be used to build computing systems.

G.J. Sussman - Evolvability, Robust Design, and Space

It is hard to build robust systems: systems that have acceptable behavior over a larger class of situations than was anticipated by their designers.  The most robust systems are evolvable: they can be easily adapted to new situations with only minor modification.  How can we design systems that are flexible in this way?

Observations of biological systems tell us a great deal about how to make robust and evolvable systems.  Techniques originally developed in support of symbolic Artificial Intelligence can be viewed as ways of enhancing robustness and evolvability in programs and other engineered systems.  By contrast, common practice of computer science actively discourages the construction of robust systems.

Robust designs are built on an additive infrastructure: there are exposed interfaces for attaching new functionality without serious disruption of preexisting mechanisms.  Indeed, the ability to harmlessly duplicate a mechanism and then modify the copy to supply useful new functionality is one of the principal ploys appearing in natural evolution.  What are the preconditions that support such augmentation?  Can we, as engineers, arrange our systems to be extensible in this way?  Are there exploitable analogies between the techniques that we have created to make extensible artifacts and the mechanisms that we find in biological systems?

One powerful idea in biological systems is the use of space to distribute function.  In principle, each cell of a multicellular organism is capable of living by itself.  But a multicellular organism is a community of cells that are spatially differentiated to build particular structures and to perform particular functions.  How is this arranged?  We now understand how this can contribute to reliability, but do we understand how it contributes to flexibility?

I will address these and related issues, and show, in terms of explicit and concrete examples, how some of the insights we glean can inform the way we do engineering in the age of information.

J. Lizier - Analysing the information dynamics of flocking

We describe how to adapt an existing framework for the quantitative analysis of the information dynamics of computation in ordered spatiotemporal systems to systems of amorphous, homogeneous agents. As an example, we discuss how this could be used to analyse the manner in which the agents in a flocking model compute their collective motion, both locally in time for each agent and on average as a function of a phase transition in the model.
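Frameworks of this kind build on information-theoretic measures such as transfer entropy, T_{Y->X} = sum_{x',x,y} p(x',x,y) * log2( p(x'|x,y) / p(x'|x) ), which quantifies how much the past of one agent's variable helps predict the next value of another's. Below is a naive plug-in estimator for discrete series with history length 1 (a sketch of the general measure, not the authors' pipeline; the synthetic example data is an assumption).

    import random
    from collections import Counter
    from math import log2

    def transfer_entropy(x, y):
        # Plug-in estimate of T_{Y->X}: average log-ratio of predicting
        # x[t+1] from (x[t], y[t]) versus from x[t] alone.
        n = len(x) - 1
        c_xxy = Counter((x[t + 1], x[t], y[t]) for t in range(n))
        c_xy  = Counter((x[t], y[t])           for t in range(n))
        c_xx  = Counter((x[t + 1], x[t])       for t in range(n))
        c_x   = Counter( x[t]                  for t in range(n))
        te = 0.0
        for (x1, x0, y0), c in c_xxy.items():
            te += (c / n) * log2((c / c_xy[(x0, y0)])
                                 / (c_xx[(x1, x0)] / c_x[x0]))
        return te

    rng = random.Random(0)
    y = [rng.randint(0, 1) for _ in range(10000)]
    x = [0] + y[:-1]                      # x copies y with a one-step delay
    print(round(transfer_entropy(x, y), 3))   # close to 1 bit: y drives x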

L. Maignan - A 1D CA that moves particles until regular spatial placement

We consider a cellular automaton with particles, where at each site a bit encodes the presence or absence of a particle. Starting from an arbitrary initial configuration, our goal is to move the particles between neighboring sites until each particle is placed at the same distance from its neighbors. This problem, called the ``regular placement'' problem, is the CA equivalent of load balancing in parallel computing. We present a CA rule that solves this problem in the 1D case and is convergent, i.e., once the regular placement is achieved, the configuration no longer changes. The rule is inspired by Lloyd's algorithm, which computes a centroidal Voronoi tessellation. The dynamics of the rule are illustrated with self-explanatory spatio-temporal diagrams. These show that particles move upon reception of signals carrying the energy of the system. Each signal bounces off or passes through particles until it eventually reaches the border and vanishes. When all signals have vanished, the particles are regularly placed.
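For intuition, here is the 1D Lloyd relaxation that inspires the rule (the global algorithm, not the local signal-based CA rule itself; the positions and domain below are arbitrary): each particle moves to the centroid of its Voronoi cell, whose borders are the midpoints to its neighbours, with the domain boundaries playing the role of the CA borders.

    def lloyd_1d(p, left=0.0, right=100.0, iters=2000):
        # Relax particle positions toward regular spacing: each particle
        # moves to the centroid of its 1D Voronoi cell, whose borders are
        # the midpoints to its neighbours (domain walls for end particles).
        p = sorted(p)
        for _ in range(iters):
            new = []
            for i, x in enumerate(p):
                lo = left  if i == 0           else (p[i - 1] + x) / 2
                hi = right if i == len(p) - 1  else (x + p[i + 1]) / 2
                new.append((lo + hi) / 2)      # cell centroid
            p = new
        return p

    print([round(x, 2) for x in lloyd_1d([3, 4, 5, 90])])
    # converges toward evenly spaced positions 12.5, 37.5, 62.5, 87.5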

N. Fates, A. Spicher & O. Simonin - From Reactive Multi-Agents Models to Cellular Automata

Extended abstract available here.

R. Doursat - From Morphogenesis to Embryomorphic Engineering

Extended abstract available here.

S. Verlan - Membrane systems: computational and modeling power

Membrane systems (also called P systems) are distributed parallel computing devices inspired by the structure and functioning of a living cell. The cell is seen as a set of compartments (membranes) nested one inside another; each compartment contains objects as well as evolution rules that transform the objects and perform communication between membranes. The topology, the spatial distribution of objects, and their interactions have a direct influence on the properties of the system. Numerous variants of membrane systems exist, using different ingredients that lead to different computational power and make membrane systems suitable not only for theoretical investigation but also for modeling biological phenomena.
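A minimal sketch of the maximally parallel multiset rewriting that drives such systems (a single membrane only, with hypothetical rules a -> bb and b -> a; nondeterminism is resolved by random choice):

    import random
    from collections import Counter

    def step(multiset, rules, rng):
        # One maximally parallel step: keep applying applicable rules,
        # chosen at random, until no rule can consume what is left.
        left = Counter(multiset)      # objects still available this step
        produced = Counter()          # objects created, visible next step
        while True:
            applicable = [(lhs, rhs) for lhs, rhs in rules
                          if all(left[o] >= c for o, c in lhs.items())]
            if not applicable:
                break
            lhs, rhs = rng.choice(applicable)
            left -= lhs
            produced += rhs
        return left + produced

    # hypothetical rules, written as (consumed, produced) multisets
    rules = [(Counter("a"), Counter("bb")), (Counter("b"), Counter("a"))]
    ms = Counter("aab")
    rng = random.Random(0)
    for t in range(4):
        print(dict(ms))
        ms = step(ms, rules, rng)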

In this talk we give an overview of the field and introduce the principal classes of membrane systems.