Some time ago I wrote about a nice visualization of lambda calculus called VEX (visual expressions). In VEX, variable bindings are made explicit by connecting lines, whereas in conventional lambda calculus bindings are given implicitly by variable names. In my experience this is difficult for students to understand, in particular once alpha conversion comes into play. Today I have seen an even nicer visualization for representing and executing lambda expressions called Alligator Eggs!.
For motivation, here is an example expression from the Alligator Eggs! website:
Very briefly: A hungry alligator is an abstraction, an old (=uncolored and non-hungry) alligator can be used for bracketing, and eggs represent variables. There is an eating rule, which corresponds to beta-reduction, and a color rule for over-cautious alpha-conversion. Finally, for clean-up, there is an old age rule, which says that if a pair of parentheses contains a single term, the parentheses can be removed.
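For readers who prefer symbols to reptiles, the eating rule is just beta-reduction. Here is a minimal sketch in Python (my own illustration, not code from the Alligator Eggs! site) that represents lambda terms as nested tuples and performs one reduction step; capture avoidance, i.e. the color rule, is assumed to have been applied beforehand.

```python
# Minimal lambda-calculus terms and one step of beta-reduction,
# mirroring the "eating rule". Terms are nested tuples:
# ("var", name), ("lam", name, body), ("app", fun, arg).

def substitute(term, name, value):
    """Replace free occurrences of `name` in `term` by `value`."""
    kind = term[0]
    if kind == "var":
        return value if term[1] == name else term
    if kind == "lam":
        # The bound variable shadows the outer one. No capture handling
        # here: we assume terms were alpha-converted apart beforehand.
        if term[1] == name:
            return term
        return ("lam", term[1], substitute(term[2], name, value))
    if kind == "app":
        return ("app", substitute(term[1], name, value),
                       substitute(term[2], name, value))

def beta_step(term):
    """Reduce the outermost redex (lam x. body) arg, if present."""
    if term[0] == "app" and term[1][0] == "lam":
        _, var, body = term[1]
        return substitute(body, var, term[2])
    return term

# (\x. x) y  --beta-->  y
identity_applied = ("app", ("lam", "x", ("var", "x")), ("var", "y"))
print(beta_step(identity_applied))  # ('var', 'y')
```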
I am really looking forward to empirical results on whether this approach helps in teaching lambda calculus.
December 10, 2008
November 18, 2008
Graph Transformation Day Bremen
Directly after the World Usability Day in Dresden I attended the Graph Transformation Day in Bremen. Here, I gave a talk about the generation of correctness-preserving editing operations for diagram editors. More on this later...
The other talks were given by my supervisor, Prof. Mark Minas (on using triple graph grammars for analysis in diagram editors), by Dr. Rubino Geiß, the architect of the GrGen graph transformation engine (and indeed his talk was about the implementation and application of GrGen), and finally by Edgar Jakumeit, who described the realization of recursive matching rules for GrGen. I found these recursive "star" rules particularly interesting because they actually make it possible to write parsers with GrGen in a declarative style. In a sense, this is quite similar to my approach to graph parsing via combinators.
It was a nice workshop, and I learned a lot about the GrGen system. In particular, I now have an idea of how they managed to build the fastest (at least in quite some cases) graph transformation tool in the wild. Thank you, Berthold, for organizing this!
World Usability Day Dresden
Last Thursday I gave a talk at the World Usability Day Dresden entitled "Nutzerunterstützung in Diagramm-Editoren zum Erlernen visueller Sprachen" (in English: user assistance in diagram editors for learning visual languages). I provide an abstract of my talk (in German) on my website. I received quite encouraging feedback and some nice tips on how to actually conduct a user study.
All in all, I really liked this workshop. It was well organized by Jan-Henning Raff from the media center of TU Dresden. There were around 50 participants listening to the ten talks. Although I am not a usability expert (I have only just started exploring this domain), I learned a lot from most of the talks. In particular, I enjoyed the talks about tools:
- Jürgen Steimle introduced the CoScribe system, an impressive pen-and-paper interface for collaboration based on the Anoto approach.
- Stefan Meißner from seto company talked about Enterprise 2.0 and introduced a nice WYSIWYG editor for websites.
- Severin Taranko from queo media demonstrated a WordPress plugin that gives visitors of your blog an overview of its most relevant and popular articles (using some reasonable criteria).
- Robert Jung motivated the use of sketch tools for CAD. In particular, he showed a nice video of ILoveSketch, a tool developed at the University of Toronto; the video is also available on the tool's website.
- Prof. Gerhard Weber and one of his students talked about accessibility in (web) applications. They have improved the accessibility of the Moodle learning platform for blind people. Furthermore, they have studied how blind and sighted people collaborate in the creation of UML class diagrams.
The day was completed by a "Usa-beer" evening in a nice bar of Dresden.
September 25, 2008
Roughly sketch your diagrams
In recent posts I have already demonstrated the power of diagram completion.
In the paper I presented at this year's LED workshop, I showed how diagram completion can be used to facilitate a (to my knowledge) novel approach to diagram editing: it is enough to just roughly sketch your diagram. You no longer need to hit the mark as precisely as traditional diagram editors require. Rather, the completion engine computes all ways in which the diagram can be put together, and the completion causing minimal change to the layout of the existing diagram components is selected automatically! I am quite sure that this feature can boost the accessibility of diagram editors.
Example:
This figure shows how tedious the manual correction of a roughly sketched diagram can be: at least three complex mouse-dragging operations are necessary here. The completion engine would yield two results for this example, diagrams A and B shown in the upper row. However, it is quite clear which diagram the user would prefer: the one with minimal changes to the existing components, namely A. A shortcut that automatically applies completion A could thus greatly improve editing performance.
However, this approach is only feasible as long as there are not too many possible compositions of the fragments. For instance, n disconnected NSD statements can be put together in n! ways. As a consequence, the user would have to invoke the completion engine every few steps. Even so, this kind of assistance is still useful.
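To make the selection step concrete, here is a hypothetical sketch in Python: among candidate completions, pick the one whose layout moves the existing components the least. The cost measure (summed Euclidean displacement) and all names are my own simplification for illustration, not DiaGen's actual layout metric.

```python
import math

# Hypothetical sketch: among candidate completions, choose the one whose
# layout differs least from the current component positions.

def layout_cost(current, candidate):
    """Sum of the distances each existing component is moved."""
    return sum(math.dist(current[c], candidate[c]) for c in current)

def best_completion(current, candidates):
    return min(candidates, key=lambda cand: layout_cost(current, cand))

current = {"stmt1": (0, 0), "stmt2": (0, 40)}
cand_a = {"stmt1": (0, 0), "stmt2": (0, 42)}      # barely moves anything
cand_b = {"stmt1": (100, 0), "stmt2": (100, 40)}  # relocates both statements
print(best_completion(current, [cand_a, cand_b]) == cand_a)  # True
```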
- S. Mazanek, S. Maier, M. Minas. Exploiting the Layout Engine to Assess Diagram Completions. Proc. of the 2nd International Workshop on Layout of (Software) Engineering Diagrams (LED 2008), 2008. Electronic Communications of the EASST.
September 16, 2008
Visual Week up and running
The Visual Week in Herrsching, Germany, is already up and running. Yesterday the workshops Layout of (Software) Engineering Diagrams and Sketch Tools for Diagramming took place. I attended LED, where I gave a talk entitled "Exploiting the Layout Engine to Assess Diagram Completions" (paper and slides). But I also really enjoyed the other talks, which covered a wide range of topics from industry projects to sophisticated graph layout approaches. A highlight was the 3D UML Heuristic Challenge: we were divided into several groups and had to solve given tasks. My group had to think about the differences between 2D and 3D usability heuristics and, subsequently, evaluate a given tool using the heuristics we had just developed.
Today the ACM SoftVis conference has started and a graduate consortium took place (many exciting new ideas!).
July 19, 2008
Usability
In this post I briefly discuss usability. First of all, I have to say that there is so much research in this area that it is nearly impossible to write a short blog post about it. For instance, Jakob Nielsen, a well-known usability expert, has been writing about usability for years. So I will just give you a quick impression by introducing the GOMS approach, which I have found very straightforward and easy to apply. In particular, I use it to evaluate the task performance of my approach to diagram completion.
GOMS is an acronym for Goals, Operators, Methods, and Selection rules. The approach, however, is much simpler than these terms suggest. The key idea is that you write down all operations that have to be performed in order to achieve a given goal. Depending on its kind, a predetermined execution time is attached to each operation. For instance, a keystroke takes 0.28 seconds for an average user, and pointing with the mouse at an object on the screen takes around 1.1 seconds. There are several more kinds of actions with corresponding, empirically determined average times. Summing the times of all necessary operations finally yields the task execution time. Once you have this time, you can compare different user interfaces, evaluate changes to a user interface, and so on. It's that easy.
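As a small illustration, this keystroke-level calculation is easy to script. The operator table below contains only the two times quoted above; any further operators would have to be taken from the GOMS literature.

```python
# Keystroke-level sketch of a GOMS estimate, using the operator times
# quoted in the text (0.28 s per keystroke, about 1.1 s per mouse
# pointing action). Illustration only; not a complete KLM operator set.

OPERATOR_TIME = {"keystroke": 0.28, "point": 1.1}

def task_time(operations):
    """Sum the predetermined execution times of a sequence of operations."""
    return sum(OPERATOR_TIME[op] for op in operations)

# Renaming a diagram element: point at it, then type a 5-letter name.
ops = ["point"] + ["keystroke"] * 5
print(round(task_time(ops), 2))  # 2.5
```

Once two interaction sequences are written down this way, comparing user interfaces is a matter of comparing two numbers.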
Note that during the Visual Week (15.-21.09.08 in Herrsching am Ammersee, Germany) there will be a tutorial on another popular usability approach called Cognitive Dimensions, where usability is evaluated at a much higher level. To quote from the tutorial website (accessed 19 July 2008):
The Cognitive Dimensions of Notations (CDs) framework is the world's leading approach to understanding the usability of programming tools. It provides an analytic framework and design vocabulary that can be used to evaluate and improve, not only programming languages, but a wide variety of environments and notations for design, problem-solving, and creative work.
Currently I am applying usability evaluation methods to my approach to diagram completion. In my talk "Exploiting the Layout Engine to Assess Diagram Completions", to be held at the layout workshop LED 2008 (a satellite event of VL/HCC 2008), I will present a first improvement: in certain situations a special shortcut can more than double task performance (in GOMS terms). Stay tuned!
Further reading:
- Stuart Card, Thomas P. Moran and Allen Newell: The Psychology of Human Computer Interaction. 1983.
- GOMS Model Work, comprehensive website about GOMS maintained by David E. Kieras.
July 18, 2008
RTA 2008
I am just back from the RTA 2008 (Rewriting Techniques and Applications) conference in Linz. My talk was on Wednesday and was about Functional-logic Graph Parser Combinators. I have already written about graph parser combinators on this blog, but that purely functional approach had several problems, which we solved using functional-logic programming techniques. In particular, the new framework can be used as a back-end for my approach to diagram completion.
Hagenberg, the conference location, is a nice little village 20 km from Linz. It has a romantic medieval castle in which the conference took place. Furthermore, an important branch of the FH Oberösterreich is located here.
From the talks I attended I especially liked the invited one by Thomas Hillenbrand about the theorem prover Waldmeister which is known to be very fast and powerful. Recently, the popular computer algebra system Mathematica even incorporated the Waldmeister approach to equational theorem proving into its tool suite.
Although I do not know much about automatic theorem proving, there are some impressive points I remember from the talk. First of all, it is still very important to think about the most efficient data structures for a particular application. In theorem proving, execution time and memory usage are a big issue, so every little improvement to the data structures can make a decisive difference. Second, there is no all-in-one algorithm suitable for every application domain. Thus, it is important to provide a flexible framework that can be parameterized by its users; in Waldmeister one has to choose among several weighting functions, each of which is superior in particular domains. Finally, automatic theorem proving has advanced a lot in recent years. Even for some "real proofs" there is a good chance that they can be found without much manual investigation.
July 14, 2008
Logic gates
In previous posts I have already discussed several visual languages such as VEX, trees, and ER diagrams. What these languages have in common is that all of them are used by computer scientists. Of course, visual languages also exist in other domains (in fact, visual notations are much older than computers).
In this post I show an editor for a language mainly used by electrical engineers: logic gates. This is a visual notation for logic expressions that is used in circuit design. Expression languages are normally context-free (although often some kind of sharing is used that cannot be described in a context-free way).
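A tiny sketch of why sharing breaks the context-free view: if gates are objects, the same subexpression object can feed several gates, turning the expression tree into a DAG. This is purely illustrative and unrelated to the editor's internal model.

```python
# Logic gates as objects. The input nodes `a` and `b` below are each
# shared by two gates, so the circuit is a DAG rather than a tree.

class Gate:
    def __init__(self, op, *inputs):
        self.op, self.inputs = op, inputs

    def eval(self, env):
        if self.op == "in":
            return env[self.inputs[0]]
        vals = [g.eval(env) for g in self.inputs]
        if self.op == "and":
            return all(vals)
        if self.op == "or":
            return any(vals)
        if self.op == "not":
            return not vals[0]

a, b = Gate("in", "a"), Gate("in", "b")
nand = Gate("not", Gate("and", a, b))
# XOR built with sharing: a and b each feed both the OR and the NAND.
xor = Gate("and", Gate("or", a, b), nand)
print(xor.eval({"a": True, "b": False}))  # True
print(xor.eval({"a": True, "b": True}))   # False
```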
Again my approach to diagram completion is applicable, so we get a powerful editor for logical expressions. Here is a screencast demonstrating its use:
I also provide this screencast on a separate page in case it appears too small here. You can also download the logic editor as an executable jar (keep in mind, however, that this is still a research prototype).
Finally, let me admit that lately most posts have been related to my own work. I hope you will forgive this rather narrow focus. I will soon attend the ICGT and VL/HCC conferences and the LED workshop, where I will surely find inspiration for other interesting topics. You might also suggest topics you would like to see discussed. And of course, if you want to write something about a fancy tool, I will gladly publish your description here as a guest contribution.
July 12, 2008
Course on Visual Languages
I have already written several posts about our previous course on graph and model transformation, conducted last autumn, and the practical that followed. Next autumn we will hold another course, this time directly on visual languages. We will discuss approaches to language definition such as extended positional grammars, constraint multiset grammars, hyperedge replacement grammars, and, of course, metamodels. I look forward to reporting on these approaches here soon.
Similar courses and practicals have been conducted at several universities in the past (e.g. at Universität Bremen (Berthold Hoffmann), CAU Kiel (Hauke Fuhrmann), RWTH Aachen (Manfred Nagl), and Philipps-Universität Marburg (Gabriele Taentzer)). I would very much appreciate any comments on the lessons learned while conducting these courses.
July 07, 2008
Trees
Trees are a central data structure in computer science. A lot of visual languages have a tree-like structure, for instance class hierarchies with single inheritance, family trees or organizational charts. In contrast to the language of all graphs the sublanguage of trees can be described in a context-free way with a hyperedge replacement grammar:
However, this straightforward definition is problematic: A given tree might have a lot of derivations, i.e., the grammar is highly ambiguous. It is simply not clear in which order sibling nodes have to be derived. Therefore, it might be a better idea to encode this order in the grammar explicitly:
An example tree then is represented as follows:
Since my approach to diagram completion is applicable to all visual languages that can be defined with hyperedge replacement grammars, a powerful tree editor with completion can be realized easily. Here is a screencast demonstrating its use:
I also provide this screencast on a separate page in case it appears too small here. You can also download the tree editor as an executable jar (keep in mind, however, that this is still a research prototype).
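To make the ambiguity of the first, unordered grammar concrete: the siblings of a node may be derived in any order, so a node with k children alone accounts for k! derivations. A two-line check (a toy count with the standard library, not tied to any particular grammar formalism):

```python
from itertools import permutations
from math import factorial

# A node with k children admits every derivation order of its siblings,
# i.e. k! orders; already 6 for three children.
siblings = ["a", "b", "c"]
orders = list(permutations(siblings))
print(len(orders) == factorial(len(siblings)))  # True
```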
July 05, 2008
Constraint Logic Programming and Diagram Completion
I got stimulating feedback after my last post on diagram completion. For instance, I was not aware of the SmartEMF project of Anders Hessellund et al. They aim at extending EMF with an inference engine to facilitate reasoning about models. Interestingly, they also get some guidance that way. A similar approach has been realized in the AToM3 tool by Sagar Sen et al. They also employ constraint logic programming to infer model completions.
The diagram completion presented in my last post has been incorporated into the DiaGen tool, which is based on hyperedge replacement grammars. Nowadays, however, metamodels are more popular for language specification. I have already introduced the DiaMeta tool on this blog. It is based on DiaGen but allows free-hand editors to be generated from a metamodel-based specification; syntax analysis can then be mapped to the solution of a constraint satisfaction problem. It would be interesting to see whether the approaches of Hessellund and Sen could be used to obtain similar diagram completion for DiaMeta.
Dear readers, please continue to comment on my posts or give feedback by mail. I appreciate this very much. If your own project is somehow related to visual languages, do not hesitate to send me a short summary. I will gladly post and/or discuss it here on this blog.
Further reading:
- Anders Hessellund, Krzysztof Czarnecki, Andrzej Wasowski: Guided Development with Multiple Domain-Specific Languages. In Proc. of MODELS'07, 2007
- Sagar Sen, Benoit Baudry, and Hans Vangheluwe: Domain-specific model editors with model completion. In Multi-paradigm Modelling Workshop at MoDELS'07, 2007.
- Mark Minas: Syntax analysis for diagram editors: A constraint satisfaction problem. In Proc. of the Working Conference on Advanced Visual Interfaces (AVI'2006), 2006.
July 01, 2008
Diagram Completion
Recently I have been working on an approach to diagram completion. This kind of assistance helps users in correcting and extending diagrams. That way they can learn new languages much more easily. Furthermore, the construction of complex diagrams is also greatly simplified.
Following my approach, completions are computed on the abstract syntax level with respect to a hyperedge replacement grammar. Thereafter, they are embedded into the diagram by the layout engine. The details are provided in the papers given below.
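As a rough analogy (the real approach works on hypergraphs and a hyperedge replacement grammar, not on strings), grammar-based completion can be pictured as enumerating small derivations of a grammar and keeping those that extend the user's fragment. The toy grammar below is entirely made up for illustration.

```python
# String-grammar analogy of grammar-based completion: enumerate bounded
# derivations of a toy context-free grammar and keep those containing
# the user's fragment as a subsequence. Purely illustrative.

GRAMMAR = {"S": [["stmt"], ["stmt", "S"], ["if", "S", "S"]]}

def derive(symbol, depth):
    """All terminal strings derivable from `symbol` within `depth` steps."""
    if symbol not in GRAMMAR:        # terminal
        return [[symbol]]
    if depth == 0:
        return []
    results = []
    for rhs in GRAMMAR[symbol]:
        partials = [[]]
        for sym in rhs:
            partials = [p + d for p in partials for d in derive(sym, depth - 1)]
        results.extend(partials)
    return results

def completions(fragment, depth=4):
    def contains(seq, frag):         # subsequence test
        it = iter(seq)
        return all(tok in it for tok in frag)
    return [s for s in derive("S", depth) if contains(s, fragment)]

print(completions(["if", "stmt"])[:2])
```

In the real setting, the candidates would then be handed to the layout engine for embedding, as described above.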
For motivation, I provide a screencast that shows how diagram completion has actually been realized in the DiaGen system. I also provide it on a separate page in case it appears too small here:
I also provide this editor for Nassi-Shneiderman diagrams as an executable jar.
If you are interested you might want to attend VL/HCC 2008 where I will give a talk about the overall approach. (If you are not interested in this you should join the conference anyhow. Early registration is still possible :-))
Further reading:
- S. Mazanek, S. Maier, M. Minas. Auto-completion for Diagram Editors based on Graph Grammars. Appears in Proc. of the 2008 IEEE Symposium on Visual Languages and Human-Centric Computing (VL/HCC 2008), 2008. IEEE Computer Society Press.
- S. Mazanek, S. Maier, M. Minas. An Algorithm for Hypergraph Completion according to Hyperedge Replacement Grammars. Appears in Proc. of the 4th International Conference on Graph Transformation (ICGT 2008), 2008. LNCS.
April 27, 2008
Sierpinski Triangles
I have already introduced my approach to graph parsing via graph parser combinators on this blog. This time I provide a nice sample parser; at RTA I will present the concepts and show how the underlying framework can actually be realized in the functional-logic programming language Curry.
The example I am going to discuss is Sierpinski triangles. This graph language is particularly interesting for several reasons. First of all, Sierpinski triangles are beautiful thanks to their self-similarity. Second, they can be described by a hyperedge replacement grammar, as shown in the image below. And finally, Sierpinski triangles grow exponentially with their generation, making them an ideal candidate for tool benchmarks (see Taentzer et al. '08).
Note that the given grammar does not ensure that only regular Sierpinski triangles are generated. For instance, the graph shown below is also a member of its language. Regularity can be enforced by using a parallel replacement mechanism or control structures, though (see Habel, Kreowski '87).
Let's see how we can map this grammar to a parser using our framework:
s (Succ depth) (n1,n2,n3) = s depth (n1,n4,n5) *>
                            s depth (n4,n2,n6) *>
                            s depth (n5,n6,n3)
  where n4,n5,n6 free
s One (n1,n2,n3) = edge ("line",[n1,n2]) *>
                   edge ("line",[n2,n3]) *>
                   edge ("line",[n3,n1])

sierpinski = s depth (n1,n2,n3)
  where depth,n1,n2,n3 free
The actual mapping is described in the RTA paper, but you will probably have no difficulties figuring out how it works. Note that the inner nodes of a production are guessed using free variables, and regularity is ensured by the parameter depth. As a parser, these functions are not very interesting, since parsing cannot be done efficiently. However, due to the logic nature of the framework, they can be used directly for the generation of Sierpinski triangles. Compared to other tools (see Taentzer et al. '08), this generation is even reasonably efficient: for instance, we have generated a generation-11 Sierpinski triangle with nearly 200,000 edges in about a minute.
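For readers without a Curry system at hand, the productions above translate almost literally into a Python generator (my own transcription, not part of the framework): each recursion level replaces a triangle by three smaller ones, so a generation-n triangle has exactly 3^n line edges.

```python
from itertools import count

# Transcription of the Curry productions: the recursive case introduces
# three fresh inner nodes n4, n5, n6 shared by the three subtriangles,
# exactly as the free variables do in the production `s`.

def fresh(counter=count()):
    """Produce a fresh node identifier (mimics a free variable binding)."""
    return next(counter)

def sierpinski(depth, n1, n2, n3):
    """Yield the 'line' edges of a generation-`depth` Sierpinski triangle."""
    if depth == 1:
        yield ("line", n1, n2)
        yield ("line", n2, n3)
        yield ("line", n3, n1)
    else:
        n4, n5, n6 = fresh(), fresh(), fresh()
        yield from sierpinski(depth - 1, n1, n4, n5)
        yield from sierpinski(depth - 1, n4, n2, n6)
        yield from sierpinski(depth - 1, n5, n6, n3)

edges = list(sierpinski(11, "a", "b", "c"))
print(len(edges))  # 3**11 = 177147, i.e. "nearly 200,000" edges
```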
Further reading:
- Sierpinski Triangle for the AGTIVE 2007 Tool Contest
- G. Taentzer et al., Generation of Sierpinski triangles: A case study for graph transformation tools. In: Applications of Graph Transformations with Industrial Relevance AGTIVE ’07 (eds.: A. Schürr, M. Nagl, A. Zündorf), International Workshop Kassel, Oct. 10-12, 2007, LNCS, Springer, 2008
- A. Habel, H.-J. Kreowski, Pretty Patterns Produced by Hyperedge Replacement, Proceedings of the International Workshop on Graph-Theoretic Concepts in Computer Science, LNCS, Springer, 1987
- S. Mazanek, M. Minas, Functional-logic Graph Parser Combinators. To appear in Proceedings of 19th International Conference on Rewriting Techniques and Applications (RTA 2008), LNCS, Springer, 2008
April 22, 2008
Fujaba and Story Driven Modeling
Let's continue talking about our practical on graph and model transformation. In fact, it is already finished, but unfortunately I did not manage to write about the other stages in time. Remember that in the first stage a UML class diagram editor had to be developed by the student teams using different meta-CASE tools. The second stage, in contrast, focused on transformations. The students had to deal with the simulation of finite state machines and pushdown automata using different research tools, namely AToM3, Tiger and Fujaba. In the end, all teams presented quite appealing solutions.
Fortunately, a student of the Fujaba group, Thore Backen, was willing to provide us with a summary of his insights from the practical work with Fujaba:
In the second stage of our practical on graph and model transformation our group of three students had the objective to produce the simulation of deterministic as well as nondeterministic finite automata using Fujaba.
At first we needed quite some time (approx. 15 h) of practice to get used to Fujaba. To be honest, we had some trouble deciding which version to use for our task. In the end we chose the Fujaba Tool Suite version 4.3.2. The main reason we chose this version over the more up-to-date version 5.0.4 was the easy-to-use Dynamic Object Browsing System (DOBS) already integrated into the older version. Since we wanted to use DOBS to simulate our finite automata, the outsourced eDOBS (in Fujaba 5), which requires Eclipse, did not offer any obvious advantages.
As usual, our first step was the development of a model, i.e., a class diagram of our application domain (click the image to enlarge):
The dynamic behavior of a system can then be specified via so-called story patterns. For instance, the story diagram for the firing of a deterministic finite state machine is shown in the screenshot below:
After getting used to modeling with story patterns, which by the way turned out to be quite intuitive, we learned that simple, figurative designs produce powerful code. After all, we did not write a single line of code ourselves throughout the whole phase of the practical. Once we had taken adequate time to adjust to a new way of thinking, i.e., modeling, Fujaba enabled us to rapidly produce a stable solution to our problem.
Although it allowed us to solve our task conveniently, there were also some drawbacks to working with Fujaba. The integrated diagram editor was sometimes hard to use, e.g., the handles on associations were too small to move them comfortably, and the automatic layout often scrambled diagrams rather than clarifying them. Manual adaptation of the generated code, although not needed in the end, was not even possible. Furthermore, the integrated versioning support CoObRA did not work at all on our systems. Since diagrams grow rather quickly with story-driven modeling, we believe that the use of Fujaba is limited to small or medium-sized projects.
In summary, Fujaba offered us a quick, well-visualized way to implement the simulation of finite automata. The simulation of these automata using the integrated DOBS is more practical than beautiful, but it proved sufficient for our purposes.
April 03, 2008
GT-VMT, MetaEdit+
I am back from Budapest, where I attended the GT-VMT workshop and ETAPS. The feedback on my talk about graph parser combinators was positive, and I was even pointed to very useful and interesting references afterwards. I will provide a more detailed post about my stay in Budapest soon.
Let me just point you to some recent news from the DSL tool market. I have already written about the MetaEdit+ tool. You might be interested to hear that this week MetaCase rolled out a new version of their tool, as announced by the company's sales manager James Hammond. I have not tried the new version yet, so if somebody has tried it already: please leave a comment with your impressions.
By the way, the CEO of MetaCase, Dr. Tolvanen, gave an interesting talk (slides) about the state of the art in the domain of DSL tools and the great speed-ups that are possible with proper tool support. A main practical issue I had not been aware of before was the problem of a changing metamodel. According to Tolvanen, MetaEdit+ is quite good at dealing with this issue due to its repository approach. I wonder how other tools deal with it. Any comments on this?
March 06, 2008
Visual Week and VL/HCC 2008
From September 15th, 2008, the Visual Week will take place in Germany near Munich. Our department is organizing this event, headed by Prof. Dr.-Ing. Mark Minas. One of the main conferences is VL/HCC 2008, the 2008 IEEE Symposium on Visual Languages and Human-Centric Computing. Paper submissions are due by March 11th, i.e., next Tuesday.
I hope to see you in Herrsching.
February 22, 2008
Bidirectional synchronization
The effort of blogging already seems to pay off, as I am starting to get mails with interesting references to related work. Yesterday Miguel Garcia pointed me to his workshop paper Bidirectional Synchronization of Multiple Views of Software Models.
What I liked about the paper is its discussion of related work; the references give a very comprehensive overview of the state of the art in this field. Miguel takes a bird's-eye view of approaches to the view update problem such as program inversion, lenses, QVT, ATL and TGGs (triple graph grammars, to be discussed here in the future). Indeed, in my research on diagram editors I currently have to deal with the problem of reflecting changes at the abstract syntax level back to the concrete syntax (and vice versa, of course).
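To give a flavor of the lens idea mentioned above, here is a minimal Python sketch (my own illustration, not taken from Miguel's paper): a lens packages a view extraction (get) with an update translation (put), and the two must satisfy round-trip laws so that view and source stay in sync.

```python
class Lens:
    """A lens: get extracts a view from a source, put translates an
    updated view back into an updated source."""
    def __init__(self, get, put):
        self.get = get  # source -> view
        self.put = put  # (source, updated view) -> updated source

# Example: a view exposing only the "name" field of a model element.
name_lens = Lens(
    get=lambda src: src["name"],
    put=lambda src, v: {**src, "name": v},
)

source = {"name": "Account", "attrs": ["id", "owner"]}
view = name_lens.get(source)                  # "Account"
updated = name_lens.put(source, "Customer")   # rename via the view
print(updated)  # {'name': 'Customer', 'attrs': ['id', 'owner']}

# The round-trip laws (GetPut and PutGet) hold for this lens:
assert name_lens.put(source, name_lens.get(source)) == source
assert name_lens.get(name_lens.put(source, "X")) == "X"
```

The view update problem is exactly about finding a put that is consistent with get; bidirectionalization research tries to derive it automatically.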
Abstract of Miguel's paper:
Current best-practices for defining Domain-Specific Modeling Languages call for metamodeling techniques, which do not take into account the future use of such languages in multiview design environments. Tool implementers have tried a variety of ad-hoc techniques to maintain views in-synch, with modest results. To improve this state of affairs, a declarative approach is elaborated to automate multiview synchronization, building upon existing metamodeling techniques and recent advances in the field of function inversion for bidirectionalization. An application of these ideas to EMOF and a discussion of the resulting Declarative MVC software architecture are also provided. A significant benefit of the approach is the resulting comprehensive solution to a recurrent problem in the software modeling field.
I look forward to further external contributions!
February 21, 2008
Practical - end of first stage
The first stage of our practical on graph and model transformation is already finished. The students have produced nice class diagram editors with all three meta-CASE-tools. As expected, all tools have particular strengths and weaknesses.
GMF:
- Strong points: open source, integration with popular eclipse platform and EMF, wizards, tutorials
- Weak points: nested packages are difficult to realize, the internal metamodels are not documented very well (students sometimes had to consult the generated code to understand the differences between particular options), graphics cannot be defined in a visual way
DSL Tools:
- Strong points: predefined skeletons, good documentation (in particular the book "Domain-Specific Development with Visual Studio DSL Tools"), good error messages, visual mapping between abstract and concrete syntax
- Weak points: generated editors need Visual Studio, the arrangement of the metamodel as a tree is rather confusing, nested packages are difficult to realize
MetaEdit+:
- Strong points: good documentation, many examples, changes to graphics and metamodel on the fly, integrated code generator, nested packages possible
- Weak points: generated editors need MetaEdit+, limited project portability, too many open dialogs in particular situations
The students have learned that domain specific languages can be realized with modern tools very conveniently.
February 12, 2008
VlDesk
Let me today write about a tool I have tried recently: VlDesk.
Compared to other research tools, VlDesk stands out with a comprehensive website (including demos) and a smooth installation procedure. The underlying formalism, eXtended Positional Grammars (XPG), is well-studied (see further reading) and allows for efficient parsing. In fact, the parser is generated using the well-known Yacc parser generator. All parts of this meta-CASE tool are well-integrated. For instance, there is a graphical symbol editor, and productions can also be defined in a visual way. One can easily switch between the perspectives for language design (Visual Grammar Environment) and usage (Visual Programming Environment).
However, there are also some weak points. For instance, the system is only available for Windows. Further, some examples and particular features did not work properly.
All in all, VlDesk is a system worth trying out, in particular after reading the papers listed below.
Further reading:
- Building syntax-aware editors for visual languages, Costagliola, G., Deufemia, V., Polese, G., Risi, M., 2005, Journal of Visual Languages & Computing
- Extended Positional Grammars, Costagliola, G., Polese, G., 2000, Proceedings of the 2000 IEEE International Symposium on Visual Languages
January 30, 2008
Graph Parser Combinators
This post is about a topic I am particularly interested in, namely Graph Parser Combinators. Among functional programmers parser combinators are very popular. They are fun to play with and a lot of research has been done to make them more practically relevant (mainly by improving efficiency). There is an abundance of different libraries, e.g., the HUT Parser Combinators, Parsec or polyparse.
Since I have to deal a lot with graph-grammar based visual languages (remember my post about the diagram editor generator DiaGen), the question arose whether parser combinators may also be useful for graph parsing. So I have implemented a prototypical Haskell library and several exemplary parsers. And although there are some fundamental differences between graph and string parsing, it worked astoundingly well.
At IFL 2007 I introduced this approach. Several people asked me whether graph parser combinators are actually powerful enough to describe highly meshed graph languages such as flowgraphs, a graph language representing structured flowcharts. Such languages can naturally be described using hyperedge replacement grammars. The answer is yes, and I am going to present a solution to this problem at GT-VMT 2008.
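To convey the core idea without the Haskell library at hand, here is a minimal Python sketch (the names edge, then, alt are illustrative, not the library's actual API): a graph parser maps the remaining edges of a graph to a list of (result, remaining edges) successes, in the list-of-successes style familiar from string parser combinators. The key difference from string parsing is that edges are consumed from anywhere in the graph, not from a linear prefix.

```python
def succeed(v):
    # trivial parser: succeed without consuming any edges
    return lambda g: [(v, g)]

def edge(e):
    # primitive parser: consume the edge e if it occurs in the graph
    def p(g):
        if e in g:
            rest = list(g)
            rest.remove(e)
            return [(e, rest)]
        return []
    return p

def then(p, q):
    # sequence two parsers, keeping the second result (like *>)
    def r(g):
        return [(v2, g2) for (_, g1) in p(g) for (v2, g2) in q(g1)]
    return r

def alt(p, q):
    # nondeterministic choice: collect the successes of both alternatives
    return lambda g: p(g) + q(g)

def triangle(a, b, c):
    # recognize a triangle spanned by the nodes a, b, c
    return then(edge(("line", a, b)),
                then(edge(("line", b, c)),
                     edge(("line", c, a))))

g = [("line", 1, 2), ("line", 2, 3), ("line", 3, 1)]
results = triangle(1, 2, 3)(g)
print(len(results))  # 1: one success, with an empty remaining graph
```

A full library additionally has to handle node variables and backtracking over node identifications, which is where the functional-logic extension of the approach comes in.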
Further reading:
- Graph Parser Combinators, Mazanek, S., Minas, M., 2008, Proceedings of IFL 2007, to appear
- Parsing of Hyperedge Replacement Grammars with Graph Parser Combinators, Mazanek, S., Minas, M., 2008, Proceedings of GT-VMT 2008, to appear
January 16, 2008
VEX
VEX (visual expressions) is a VL for the representation of lambda expressions.
The picture shows two exemplary VEX expressions. The upper one represents the lambda term λy.(x y), i.e., a function that applies the free variable x to its argument. The other one represents the term λx.λy.(x y). Here the variable x is bound by an abstraction as well.
VEX is a very simple visual language in the sense that there are only three different visual components: the circle, the line and the arrow. The whole meaning of an arrangement of these components is determined by their spatial relations.
A lambda term is defined inductively. It is either a variable, an application of one lambda term to another, or an abstraction, which consists of the variable to be bound and its body, i.e., the scope of that variable. For these three cases visual representations are defined.
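This inductive definition translates directly into an algebraic data type. The following Python sketch (my own illustration, not part of VEX) models the three cases and computes free variables, which are exactly the circles that VEX requires to be connected to a free-variable circle:

```python
from dataclasses import dataclass

# A lambda term is either a variable, an application, or an abstraction.
@dataclass
class Var:
    name: str

@dataclass
class App:
    fun: object  # Term
    arg: object  # Term

@dataclass
class Abs:
    param: str    # the variable to be bound
    body: object  # Term, the scope of the variable

def free_vars(t):
    # Free variables of a term; an abstraction removes its parameter
    # from the free variables of its body.
    if isinstance(t, Var):
        return {t.name}
    if isinstance(t, App):
        return free_vars(t.fun) | free_vars(t.arg)
    return free_vars(t.body) - {t.param}

# λy.(x y) has the free variable x; λx.λy.(x y) is closed.
print(free_vars(Abs("y", App(Var("x"), Var("y")))))             # {'x'}
print(free_vars(Abs("x", Abs("y", App(Var("x"), Var("y"))))))   # set()
```

The two printed terms are exactly the two VEX expressions from the picture above.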
In VEX a single variable is represented by a circle. It does not need to have a name as in the textual form. An abstraction is represented by a circle with a smaller circle internally tangent to it. Elements contained in the bigger circle are part of the abstraction's body, whereas the smaller circle represents the parameter. The application of an expression e1 to an expression e2 is represented by an arrow that points from e1 to e2. Further, the outermost circles of e1 and e2 have to touch each other.
Variables can be bound via lines to parameters of abstractions. If they are free, they need to be bound to a free-variable circle that must not be inside any other circle. The reason for this is that it has to be possible to identify occurrences of the same free variable.
The main purpose of VEX is to simplify the teaching of lambda calculus. For this purpose it is very beneficial that variable bindings are made explicit by lines and not implicitly hidden behind equal names. In particular, one does not have to deal with the difficulties resulting from overlapping scopes and thus the need for alpha conversion.
Syntax analysis for VEX diagrams is quite difficult. There are ambiguities (e.g., of two circles connected by a line, which one is the variable and which one the free-variable circle?) and a lot of combinations of particular patterns and their spatial arrangements. However, there is a quite flexible DiaGen editor for VEX where the language is defined via a hyperedge replacement grammar. As you see in the picture, though, two different kinds of circles need to be defined, restricting the freedom of the user somewhat.
January 11, 2008
Practical on Graph and Model transformation
I have already written several posts about our seminar on graph and model transformation. For the next three months we are conducting a practical on graph and model transformation on top of this seminar.
The first task is the implementation of an editor for UML class diagrams (restricted to classes, attributes, generalizations and a limited notion of associations). However, the editor should, of course, not be implemented from scratch. Rather, it should be generated using a meta-CASE tool. We have divided the students into three small groups. Each group has to use a different meta-CASE tool. The tools to use are:
- MetaEdit+ by MetaCase, I already have written about MetaEdit+
- GMF, an eclipse project
- DSL Tools by Microsoft, a framework based on MS Visual Studio
Whereas MetaEdit+ and DSL Tools are commercial products, GMF is open source. All three tools have in common that they are of sufficient quality and well-documented, making them well-suited for our practical.
January 06, 2008
ER diagrams
In the last few days I have had to deal a lot with entity-relationship (ER) diagrams, because I have to conduct exercises for a database course. ER diagrams were developed in the seventies by Chen and are a widely used visual language for the conceptual design of database systems. Furthermore, in my last post I described the tool AToM3, whose main formalism is ER.
This figure is from the wikipedia article and shows an exemplary ER diagram.
To my surprise there seem to be no freeware or open-source tools that allow the drawing of ER diagrams in the commonly used notations, i.e., the original one by Chen plus some conservative extensions like min-max notation. I have tried several of the free tools referenced in the Wikipedia article on ER, like BrModelo (only available in Portuguese), Ferret (platform-specific and not very feature-rich) and MySQL Workbench (proprietary notation and MySQL-focused), and even some of the commercial ones like SmartDraw and MS Visio. But none of them is as feature-rich as one might expect in the context of such an old visual language.
Using DiaMeta, for instance, it would be possible to specify a compliant and feature-rich ER editor in a very short time. One could even apply an EMF-based model transformation with, e.g., ATL to derive the corresponding database schema.
Did I miss a tool that is freely available, provides an interface in English language and conforms to the widely-adopted concrete syntax? Any further references are greatly appreciated.
AToM3
I am back from vacation and continue reporting on the seminar talks. On November 29th, 2007, we discussed the tool AToM3, a research project of Hans Vangheluwe and Juan de Lara.
AToM3 is an acronym for "A Tool for Multi-formalism and Meta-Modelling". From my point of view its main features are:
- meta-CASE tool: specify a visual language (formalism) and generate an environment for it
- homogeneous view on metamodelling: define models in a particular formalism and use them directly as a formalism again; for instance, a Petri net formalism can be defined using the ER (entity-relationship) formalism, and afterwards real Petri nets can be drawn; most of AToM3 can be bootstrapped this way (even the user interaction)
- model transformations: internally, models are represented as graphs; graph grammars can be defined and used for model transformation
AToM3 is a very interesting research tool. There are a lot of publications discussing particular aspects of the tool and its theoretical foundations. However, it does not seem to be ready for production settings yet. Parts of the documentation are outdated, and even the hello-world example caused several exceptions (though I got it up and running) and useless warnings like "Bad things will happen". Still, as a researcher in the area of graph transformation you should definitely have a look at the tool, or at least at the publications.
Further reading: