December 15, 2007

DiaMeta

So, today I continue blogging here. Kindly excuse the fairly long period without a post. The submission deadline for the GTVMT conference was rapidly approaching...

The next talk in our seminar was about DiaMeta, the successor of the diagram editor generator DiaGen. In contrast to DiaGen, DiaMeta is based on a metamodel instead of a hypergraph grammar. Not that metamodels are more powerful; that is truly not the case. But people are so familiar with drawing class diagrams, a.k.a. metamodels, that nowadays metamodel-based tools are more popular.

DiaMeta is based on the Eclipse Modeling Framework we have already discussed (there is also an experimental version based on MOF, see references below). This is very beneficial from an application integration point of view, because the abstract syntax of diagrams can be serialized in standard-compliant XMI.

The most interesting point to know about syntax analysis is that it is basically performed by solving a constraint satisfaction problem. This important correlation was noticed by Minas in 2006. To this end, DiaMeta has to make heavy use of the powerful reflective features EMF provides.



The picture shows the overall DiaMeta architecture. There are two important differences from DiaGen. The first one is that the reducer no longer creates a reduced hypergraph model, but a so-called instance graph instead. This is a graph whose nodes are marked with class names of the metamodel and whose edges represent references. However, these annotations do not have to be the concrete object types; they can even be names of abstract classes. It is the task of the so-called checker (the second important difference) to map these nodes to non-abstract classes and instantiate corresponding objects. The instance graph is therefore used as a constraint net that can be solved using the information given in the metamodel, such as references, inheritance relationships, and cardinalities. This is described in detail in the first paper mentioned under Further reading.
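To make the constraint-solving idea more concrete, here is a minimal Python sketch (not DiaMeta's actual API; the tiny metamodel, the class names and the brute-force solver are made up for illustration): nodes of the instance graph may be labeled with abstract classes, and the checker has to find concrete classes consistent with the reference constraints.

```python
from itertools import product

# Hypothetical tiny metamodel: abstract class "Shape" with concrete
# subclasses Box and Circle; a "contains" reference is only allowed
# from Box to Circle.
concrete = {"Shape": ["Box", "Circle"], "Box": ["Box"], "Circle": ["Circle"]}
allowed_refs = {("Box", "contains", "Circle")}

def check(nodes, edges):
    """nodes: {id: label}; edges: [(src, ref, dst)]. Return one
    assignment of concrete classes satisfying all reference
    constraints, or None (brute force; real solvers propagate)."""
    ids = list(nodes)
    for choice in product(*(concrete[nodes[i]] for i in ids)):
        assign = dict(zip(ids, choice))
        if all((assign[s], r, assign[d]) in allowed_refs for s, r, d in edges):
            return assign
    return None

# Instance graph: two nodes labeled with the abstract class Shape,
# linked by a "contains" reference; the checker infers Box and Circle.
print(check({1: "Shape", 2: "Shape"}, [(1, "contains", 2)]))
# {1: 'Box', 2: 'Circle'}
```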

Further reading:

  • Generating meta-model-based freehand editors. Minas, Proc. of the 3rd International Workshop on Graph Based Tools (GraBaTs'06), 2006.

  • Syntax analysis for diagram editors: A constraint satisfaction problem. Minas, Proc. of the Working Conference on Advanced Visual Interfaces (AVI'2006), 2006.

  • Generating visual editors based on Fujaba/MOFLON and DiaMeta. Minas, Proc. of the Fujaba Days 2006.

  • DiaMeta website

November 26, 2007

ATL and AMW

The next talk in our seminar was about ATL, the ATLAS transformation language. The talk took place on November 15, 2007, the same day we discussed MOF QVT. The main difference between ATL and QVT is that QVT is just a specification, whereas ATL is both a language AND an implementation.



Let us briefly recall the essence of model transformation. Say we want to define a model transformation between two metamodels MMa and MMb (think of them as source and target language). Such a transformation allows us to transform instances Ma of MMa into instances Mb of MMb. Usually, both metamodels have to conform to a common metametamodel MMM. Furthermore, we usually need a transformation metamodel MMt, i.e. a language in which our transformation Mt can be defined.
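As a toy illustration of this picture (hypothetical and drastically simplified; real metamodels are of course much richer than sets of type names), a model can be seen as a list of typed elements, conformance as a type check, and the transformation Mt as a mapping between types:

```python
# Metamodels as sets of allowed element types (illustrative only)
MMa = {"Class", "Attribute"}   # source metamodel
MMb = {"Table", "Column"}      # target metamodel

def conforms(model, metamodel):
    """A model 'conforms' to a metamodel if it only uses its types."""
    return all(t in metamodel for t, _ in model)

# The transformation definition Mt: here just a type-to-type mapping
Mt = {"Class": "Table", "Attribute": "Column"}

def transform(Ma):
    """Apply Mt to an instance Ma of MMa, yielding an instance of MMb."""
    assert conforms(Ma, MMa)
    Mb = [(Mt[t], name) for t, name in Ma]
    assert conforms(Mb, MMb)
    return Mb

Ma = [("Class", "Person"), ("Attribute", "name")]
print(transform(Ma))  # [('Table', 'Person'), ('Column', 'name')]
```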



ATL is based on the Eclipse platform. Here it can, for instance, be used to define transformations between Ecore models. The picture on the left side shows how the picture above (both taken from the ATL Starter Guide) is instantiated by ATL.

We cannot discuss the differences between ATL and QVT here in detail (there is a paper addressing their alignment, see below); however, we can give some reasons why ATL is also important:


  • Model transformation is young. We do not yet have enough experience in the field to commit to a single standard. Therefore, a broader variety of tools and languages is useful.

  • ATL works. It is smoothly integrated into Eclipse, well-documented, has its own IDE (debugger, etc.) and an active community that gladly offers support via the newsgroup. There are already tools for QVT, too; however, they are not that mature yet.

  • ATL is different. It provides interesting concepts not included in QVT. AMW (ATLAS Model Weaver), for instance, is the ATL way to describe model transformations on a more abstract level. Furthermore, ATL offers rule refinement (all elements that are not addressed by a rule are copied) and a model handler abstraction layer, i.e. different model handlers (e.g. for EMF or MDR) can be used.



Note that a nice thing about model transformation is that you can even transform transformations. There is an ATL use case, QVT2ATLVM, that transforms a QVT transformation into an ATL transformation.

Further reading:

  • ATL

  • Atlas Model Weaver (AMW)

  • On the Architectural Alignment of ATL and QVT, Jouault, F., and Kurtev, I., Proceedings of ACM Symposium on Applied Computing (SAC 06)

November 23, 2007

UML2 Certification Guide

This is a short review of the book "UML2 Certification Guide - Fundamental & Intermediate Exams" by Weilkiens and Oesterreich.

The purpose of the book is to prepare for the UML exams of the OMG, a way to certify different levels of knowledge about the UML.

This book is NOT for learning UML. It does not focus on UML diagrams, but on the abstract syntax of UML as defined in the specification. The book is structured in three parts: an introduction that provides information about the history of UML, the OMG and the exams, and two main parts introducing the examination-relevant knowledge for the OCUP Fundamental exam and the OCUP Intermediate exam, respectively.

Good:

  • complete coverage of the examination-relevant parts of the specification (at least the authors claim so)

  • example questions

  • price ok




Bad:

  • not self-contained: concepts crucial for understanding the diagrams, such as union and subsets, are not introduced at all, and the structure of the specification is not discussed (Infrastructure, Superstructure, MOF, etc.)

  • too many annoying little errors

  • didactically not state of the art



Conclusion: If the book really covers exactly the material of the exams, as the authors claim, it is valuable in this respect. Furthermore, several example questions are provided that may also be helpful. However, the OMG site also gives the examination-relevant parts of the specification as colored tables of contents. And since the book is not that good from a methodical point of view, people who are familiar with reading specifications may not really need it. Still, since the price is ok, I do not regret having bought it. But reading the specification remains necessary.

Further reading:

MetaEdit+

MetaEdit+ is a commercial tool for both the design and use of (domain-specific) modeling languages (a meta-CASE-tool).

The underlying metamodeling language in MetaEdit+ is called GOPPRR and is very simple. There are just Properties and Non-Properties (which in turn can have Properties again). The Non-Property types are Graph, Object, Port, Role and Relationship, and for each of these types there is a creation wizard/form.

A feature that distinguishes MetaEdit+ from other tools is its symbol editor. You do not have to specify the concrete syntax programmatically. Rather, you just draw the component in WYSIWYG style, which makes the object creation process very simple. In particular, it is not necessary to define the mapping between abstract and concrete syntax explicitly as in GMF (to be discussed in a later post).

The relationships are specified by providing a so-called binding that tells the system which object types may be linked together and in which roles. The graph tool finally lets you draw the instances of your metamodel, i.e. your diagrams. As the name of this tool already suggests, MetaEdit+ is best suited for graph-like languages. Languages that make heavy use of spatial relations (like VEX, NSD and others) are difficult to create in MetaEdit+. However, a lot of practically relevant languages have a graph structure.

Some advantages of MetaEdit+:

  • great support, I even got a personal web demonstration with special emphasis on my specific questions

  • easy to install and use

  • feature-rich, e.g. nice editor for components

  • lots of examples and documentation

  • highly integrated

  • Model2Code transformation possible



And some little disadvantages:

  • no explicit support for spatial relations

  • closed source, expensive (it's a commercial tool after all; evaluation is free and they also provide an academic license, though)

  • no free-hand editing like e.g. in DiaGen/DiaMeta (to be described later)

  • version control not so good (but possible, some best practices are suggested)


Conclusion: This tool is definitely worth a try. It is well suited for the rapid development of CASE tools for all kinds of graph-like visual languages.

Further reading:

November 17, 2007

MOF QVT

This post is about MOF QVT (Query/View/Transformation), a model transformation language we also discussed in the course of our seminar (November 15, 2007). First of all, let's briefly explain these three terms:

  • A query basically is an expression evaluated over a model. OCL is a prominent example of a query language that we will discuss in a future post.

  • A model completely derived from another model is called a view. Mostly, views are not persistent and cannot be changed independently of their base model.

  • A program that generates a target model from a source model is called a transformation.


The focus of MOF QVT is clearly on transformations.
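To keep the three terms apart, here is a tiny Python sketch (purely illustrative; the toy model and the helper names are made up): a query is an expression over the model, a view is a derived model, and a transformation produces a new target model.

```python
# A toy model: a list of (kind, name) elements (illustrative only)
model = [("Class", "Person"), ("Class", "Order"), ("Attribute", "name")]

# Query: an expression evaluated over the model
num_classes = sum(1 for kind, _ in model if kind == "Class")

# View: a model completely derived from the base model
def class_view(m):
    return [e for e in m if e[0] == "Class"]

# Transformation: a program generating a target model from a source model
def to_relational(m):
    return [("Table", name) for kind, name in m if kind == "Class"]

print(num_classes)           # 2
print(class_view(model))     # [('Class', 'Person'), ('Class', 'Order')]
print(to_relational(model))  # [('Table', 'Person'), ('Table', 'Order')]
```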

Transformations can be classified in several ways. We describe some providing examples at the same time:

  • horizontal vs. vertical: In contrast to horizontal transformations a vertical transformation changes the level of abstraction. Examples: class diagram to relational database schema (horizontal), model to code (vertical)

  • unidirectional vs. bidirectional: the direction of the transformation and the way changes can be propagated

  • input/output cardinalities: number of input and output models



Similar to programming languages, different paradigms are possible for describing a model transformation: operational (the sequence of steps necessary to produce the result) and declarative (only relationships between model elements are described). QVT follows a hybrid approach, i.e. both paradigms are supported.

The abstract syntax of QVT itself is defined using MOF. The specification, however, provides both a textual and a visual notation for the definition of transformations.

We cannot introduce these languages here in detail; however, here is a short teaser (taken directly from the specification, chapter 7):

relation UML2Rel {
  checkonly domain uml1
    c:Class {name = n, attribute = a:Attribute{name = an}}
  checkonly domain r1
    t:Table {name = n, column = col:Column{name = an}}
}


This is an excerpt from the classical example, the transformation of a class diagram into a relational database schema; the relation describes the mapping between a class and a table. Visually, this can be expressed as shown in the image on the left side.
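A plain-Python reading of the checkonly semantics of this relation might look as follows (purely illustrative; the sample data and the function name are made up, and real QVT matching is far more involved): nothing is created, we merely check that for every class/attribute pair a table/column pair with matching names exists.

```python
# (class, attribute) pairs on the UML side; (table, column) pairs on
# the relational side (made-up sample data)
classes = [("Person", "name"), ("Order", "date")]
tables = [("Person", "name"), ("Order", "date")]

def uml2rel_holds(classes, tables):
    """Check-only: the relation holds if every class/attribute pair
    has a table/column pair with the same names."""
    return all(pair in tables for pair in classes)

print(uml2rel_holds(classes, tables))                # True
print(uml2rel_holds(classes, [("Person", "name")]))  # False
```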

QVT is a very new language, so tool support is quite limited up to now. The operational part is implemented, e.g., in the tool SmartQVT. In the next post I will shortly introduce ATL, the ATLAS transformation language, which is not a QVT implementation but provides a very mature and widely used model transformation language of its own.

At the moment there is a lot of discussion on how to bring together graph transformation (see, e.g., my introductory post about Graph Transformation) and QVT. I will discuss this together with the introduction of triple graph grammars in a later post.

Further reading:

November 15, 2007

EMF and MOF

On November 8, 2007 the student presentation about the metamodeling frameworks EMF and MOF took place. Note that there will be many more posts about EMF and MOF on this blog (in fact, these metamodeling frameworks are an essential part of my research), so this initial post is just to give you a start.



Metalayers up to MOF


So, why do we need metamodels? In computer science we usually describe objects of the real world in terms of models, e.g. by providing a UML class diagram. One advantage of this approach, amongst others, is that we gain a better understanding of the particular application domain. A modeling language itself can be very complex as well. For instance, the UML specification consists of hundreds of pages. For such a modeling language it is even more important to be defined as formally and precisely as possible, so it is quite self-evident to apply the modeling approach to the modeling language itself. Modeling a modeling language is called metamodeling to avoid confusion. The classical picture on the left, from the UML Infrastructure specification (chapter 7), provides a good understanding of this issue. You see that the components of a class diagram can be modeled by (you have probably guessed it) a class diagram again. For instance, we could define that a class may contain several attributes and so on (a composition association between the metaclasses Class and Attribute).

So the language UML is defined on level M2. However, we are not done yet. We still have not specified how a language like UML can be defined. We raise the level of abstraction once more and get MOF, a language to describe languages (calm down, you do not have to learn yet another language from scratch, because MOF makes heavy use of the well-known class diagrams). Of course we could continue this procedure ad infinitum, but here is the good news: MOF can be described using just itself. So we are done (at least for the moment ;-)).
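Incidentally, this tower of metalevels has a nice analogue in Python's type system (just an analogy, not MOF itself): objects are instances of classes, classes are instances of type, and type is an instance of itself, closing the loop just like MOF does.

```python
# M1: a modeled class; M0: one of its instances
class Person:
    pass

p = Person()

# Reflection across the levels: each level is described by the one above
assert type(p) is Person     # M0 is an instance of M1
assert type(Person) is type  # M1 is an instance of the metalevel M2
assert type(type) is type    # the tower closes on itself, like MOF
print("the metalevel tower closes at type")
```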

EMF


Why do we need yet another modeling framework? Is MOF not enough to describe arbitrary languages? Yes, in principle it is. But it has two disadvantages:

  • The MOF language is very powerful. Yes, indeed this can be a disadvantage, because you have to understand the concepts to use it. And the standard is very big.

  • The MOF language lacks tool support. Currently there is no widely used and mature tool for MOF. This may change in the future. The research tool MOFLON is quite powerful already and may fill this gap.



In contrast, EMF is widely used. It is integrated into the popular Eclipse platform, provides a powerful code generator and is easy to use. Its metametamodel is called Ecore. You can think of it as the practically relevant subset of MOF (some quite useful features of MOF, however, are lost).

To conclude, the main advantages of Metamodeling are:

  • Precise definition of languages

  • Helpful for developing CASE tools

  • Use of Reflection possible (exploiting information about the model at runtime)



Further reading:

  • OMG's MOF

  • MOF Specification

  • MOFLON, a MOF diagram editor and meta-CASE tool

  • EMF Website

  • Eclipse Modeling Framework, Budinsky, Steinberg, Merks, Ellersick, Grose, 2003: the standard book; however, do not buy it now but wait for the new edition that will appear very soon

DiaGen

The next student presentation in our seminar was about DiaGen, the diagram editor generator developed at our department. It was also held on October 25, 2007.

The main advantages of DiaGen editors are (from my point of view):

  • combination of free-hand editing and syntax-directed editing

  • on-line syntax checks of diagrams during the editing process, partially correct diagrams as intermediate steps are perfectly normal






DiaGen provides an editor framework as well as a so-called Designer for the specification of diagram editors. The editor developer specifies the editor (i.e., the language) and generates editor-specific code; together with the DiaGen editor framework, a fully-fledged diagram editor can be launched immediately.



The editing process is not half as difficult as it may look at first sight :-), so read on. The editor user draws components freely on the screen. In a first step the so-called modeler maps the diagram to its hypergraph representation, as already mentioned in the last post. This hypergraph model already incorporates spatial relations as edges, e.g. to indicate that a particular component is contained in another one. The next step is the reducer. To understand the reducer, an analogy from string parsing is handy: think of it as a lexer that provides a more abstract representation of the hypergraph model. The last important step is parsing the reduced hypergraph model and giving the user visual feedback about correct parts and errors. Parsing is done according to a hypergraph grammar, as explained in the last post about hyperedge replacement graph grammars.
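The following Python fragment sketches this pipeline in drastically simplified form (hypothetical data structures and a trivial grammar, not DiaGen's API): the modeler turns drawn components into a hypergraph including a spatial relation, the reducer acts like a lexer, and the parser checks the reduced model.

```python
def modeler(components):
    """Map drawn components to a hypergraph; here every component
    becomes a 'stmt' hyperedge and vertical adjacency becomes a
    'below' spatial relation edge (made-up representation)."""
    stmt_edges = [("stmt", c["id"]) for c in components]
    spatial = [("below", a["id"], b["id"])
               for a, b in zip(components, components[1:])]
    return {"edges": stmt_edges + spatial}

def reducer(hypergraph):
    """Lexer-like step: reduce the hypergraph to an abstract token
    sequence, dropping the spatial details."""
    return [lbl for lbl, *_ in hypergraph["edges"] if lbl == "stmt"]

def parser(tokens):
    """Trivial 'grammar': a correct diagram is a non-empty statement
    sequence; a real parser would use the hypergraph grammar."""
    return len(tokens) > 0 and all(t == "stmt" for t in tokens)

drawing = [{"id": 1}, {"id": 2}, {"id": 3}]
print(parser(reducer(modeler(drawing))))  # True
```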




For motivational purposes here you can see an image of a DiaGen editor for flowcharts.




Further reading:

  • DiaGen Website

  • Concepts and realization of a diagram editor generator based on hypergraph transformation, Mark Minas, 2002, Science of Computer Programming, 44(2):157-180

VL Conferences

You may be interested in using my public Google calendar for conferences about visual languages:


I assume no liability for the correctness of the dates. It's just a service. If you think an important date is missing or wrong, please drop me a note.

Hyperedge Replacement Graph Grammars

The second talk was held on October 25, 2007. It was about hyperedge replacement graph grammars (HRGs). I know, I know... In the last post I promised to focus more on practical things. But hyperedge replacement is more practical than you might think. For instance, hypergraphs can be used as a model for visual languages, as realized in the tool DiaGen. Following this approach, a visual language can be defined via an HRG, and syntax analysis of diagrams can be reduced to hypergraph parsing.

The main advantages of HRG are:

  • easy to grasp

  • very close analogy to context-free string grammars

  • straightforward syntax analysis possible



I cannot introduce the whole theory in a single blog post, so I give you an impression by showing some pictures.



This is a flowchart. It consists of statements (rectangles) and conditions (diamonds) that are linked with arrows indicating the flow of control. The presented algorithm is the so-called Hasse-Collatz problem, but this is not so interesting here. More interesting is the fact that, due to their graph-like structure, flowcharts can be described by an HRG quite straightforwardly.



However, before I can show you the HRG, I first have to give a possible hypergraph model of the example flowchart. Every flowchart component is mapped to a hyperedge, represented by a rectangle. In contrast to directed edges, a hyperedge may connect an arbitrary number of nodes, so that a representation by an arrow is not possible in general. The nodes are given by filled black circles. Nodes visited by a particular hyperedge are connected to this edge by lines called the tentacles of the edge. Each tentacle has a particular role; for instance, it has to be clear which tentacle of a cond edge represents the true and which one the false case. Hyperedges with the same label have to visit the same number of nodes; this number is called the type of the edge.
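In code, such a hypergraph model could be encoded as follows (a hypothetical encoding for illustration, not DiaGen's): each hyperedge is a label plus an ordered tuple of visited nodes, where the position in the tuple plays the role of the tentacle.

```python
# Each hyperedge: (label, tuple of visited nodes); the tuple position
# is the tentacle's role, e.g. for "cond": (incoming, true, false).
edges = [
    ("stmt", (1, 2)),
    ("cond", (2, 3, 4)),
    ("stmt", (3, 5)),
    ("stmt", (4, 5)),
]

def well_typed(edges):
    """Hyperedges with the same label must visit the same number of
    nodes; this number is the type of the edge."""
    types = {}
    for label, nodes in edges:
        if types.setdefault(label, len(nodes)) != len(nodes):
            return False
    return True

print(well_typed(edges))  # True
```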



Now we can discuss the HRG of structured flowgraphs (the graphs behind correct flowcharts). An HRG is defined by a set of nonterminal labels N, a set of terminal labels T, a set of productions P and a start label S that has to be an element of N. The productions are given in the picture in a notation similar to BNF. They are defined recursively in a quite intuitive manner. Note that an HRG production always has a single nonterminal edge on its left-hand side (lhs). As a consequence, HRGs have nice properties resulting from this kind of context-freeness. However, it is also a restriction, because important graph languages cannot be described this way.
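The flavor of hyperedge replacement can be sketched in a few lines of Python (a made-up mini-grammar, not the flowgraph grammar from the picture): a nonterminal Flow edge between two nodes is replaced either by a single stmt edge or by a stmt edge followed by another Flow edge, gluing at the shared nodes.

```python
import itertools

fresh = itertools.count(100)  # supply of fresh inner node ids

def productions(edge):
    """The two productions for the nonterminal 'Flow': the replacing
    graph reuses the attachment nodes a and b as gluing nodes."""
    label, (a, b) = edge
    n = next(fresh)
    return [
        [("stmt", (a, b))],                    # Flow ::= stmt
        [("stmt", (a, n)), ("Flow", (n, b))],  # Flow ::= stmt Flow
    ]

def derive(graph, choices):
    """Replace the first 'Flow' hyperedge once per choice (a
    production index), left to right."""
    graph = list(graph)
    for i in choices:
        nt = next(e for e in graph if e[0] == "Flow")
        graph.remove(nt)
        graph.extend(productions(nt)[i])
    return graph

# Derive a chain of three statements between the external nodes 0 and 1
print(derive([("Flow", (0, 1))], [1, 1, 0]))
```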

Hyperedge replacement is a special case of the DPO approach (as introduced in the last post about the algebraic approach): the gluing graph consists of just the nodes that occur in both the lhs and the rhs of a production.



Due to the context-freeness of HRGs it is possible to construct derivation trees. Such a tree can be represented as shown in the picture (it is the tree for our example). All leaves are marked with terminal symbols and inner nodes with nonterminal symbols. The number of subtrees of a node depends on the number of hyperedges on the rhs of the applied production. The numbers in parentheses are the corresponding node numbers of the graph model.

Further reading:

  • Handbook of Graph Grammars and Computing by Graph Transformation, Rozenberg (ed.), 1997, World Scientific Publishing: chapter 2

Algebraic approach to graph transformation

Let's start with the first talk of the seminar, held on October 18, 2007, about the algebraic approach to graph transformation, which can be subdivided into the single and the double pushout approach. These terms sound quite daunting, but the underlying concept is not hard to grasp. Both approaches are based on the concepts of graph, production and graph grammar, which I will not introduce here in detail. Just note that the concepts of production and grammar can be understood quite analogously to the corresponding notions in string rewriting.



In the double pushout approach (DPO) a production consists of three graphs. The left-hand side (lhs) graph L is the graph to be matched. The interface graph K is the so-called gluing graph, i.e. the subgraph of L that is not deleted by the production. And the right-hand side (rhs) graph R is the graph L should be replaced with. Between K and the graphs L and R, morphisms l and r have to be given, mainly because corresponding nodes and edges have to be identified. If this production is to be applied to a host graph G, a double pushout has to be constructed. Therefore we need to find a match of L in G (also a morphism, m). Next, all parts of L that are not in K have to be deleted from G (of course we mean the images with regard to the particular morphisms), yielding the context graph D. And finally we have to embed into D all parts of R that are not in K, leading to the target graph H. A production can only be applied if the morphism diagram commutes and the universal pushout property holds twice; this sounds complicated, but it conforms to the intuitive understanding of graph transformation. The DPO approach is very nice from a mathematical point of view; however, it is not always applicable.



The single pushout approach (SPO) does not need a gluing graph K. A production just consists of a lhs graph L and a rhs graph R with a partial morphism in between. As in DPO, a match of L in G has to be found. This match gets replaced by R. Nodes and edges that do not occur in the rhs anymore are deleted rigorously.

The main difference between single and double pushout is the so-called dangling edge condition. Following the double pushout approach, a node must not be removed if there are still edges incident to this node. In the single pushout approach this restriction is dropped: all dangling edges are just deleted. This deletion of edges makes the SPO quite difficult from a theoretical point of view.
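The dangling edge behavior can be illustrated on plain directed graphs (a hypothetical mini-example that ignores matches and morphisms entirely): SPO-style deletion of a node simply takes all incident edges with it, whereas DPO's dangling edge condition forbids the application.

```python
def apply_spo_delete(nodes, edges, node):
    """Delete a matched node SPO-style: incident (dangling) edges are
    removed along with it."""
    nodes = nodes - {node}
    edges = {(s, t) for s, t in edges if s in nodes and t in nodes}
    return nodes, edges

def dpo_applicable(nodes, edges, node):
    """DPO's dangling edge condition: deleting the node is only
    allowed if it has no incident edges."""
    return all(node not in (s, t) for s, t in edges)

G_nodes, G_edges = {"a", "b", "c"}, {("a", "b"), ("b", "c")}

print(dpo_applicable(G_nodes, G_edges, "b"))    # False: DPO forbids it
print(apply_spo_delete(G_nodes, G_edges, "b"))  # SPO deletes b and both edges
```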

Ok, so far for now. Sorry for starting with theory; I promise that practical topics will be the focus of this blog in the future :-)

Further reading:

  • Handbook of Graph Grammars and Computing by Graph Transformation, Rozenberg (ed.), 1997, World Scientific Publishing: chapters 3 and 4

  • Graph Transformation in a Nutshell, Reiko Heckel, Electronic Notes in Theoretical Computer Science, Volume 148, Issue 1, Proceedings of the School of SegraVis Research Training Network on Foundations of Visual Modelling Techniques (FoVMT 2004), 2006, Pages 187-198.

Course on Graph and Model transformation

My name is Steffen Mazanek and I am a scientific assistant at the department of computer science of the Universität der Bundeswehr, Munich. Here I work on my PhD thesis in the field of visual languages. My supervisor is Prof. Mark Minas.

I am also involved in teaching. For instance, this semester we are conducting a course on graph and model transformation. I think this is a good starting point for this new blog on visual languages, so my first posts will be about the topics of the students' presentations. Next semester we plan to conduct a lab course to give the students hands-on experience with common tools in this domain. This is also going to be interesting.

Furthermore, I will briefly present and discuss interesting research papers from this domain here. So, stay tuned.