A Distributed JPetStore

The MyBatis JPetStore is a well-known example application, originally built to demonstrate the Stripes framework. It is a servlet-based web application which is widely used in academia to test and evaluate new approaches. The preference for the JPetStore originates from its minimalistic design. Yet it is also functionally complete, as it implements all typical aspects of a shopping system, including a catalog, a search function, a shopping cart, customer management, and sales and order management.

While the JPetStore is a neat application for software engineering purposes, it is not designed as a distributed system. Therefore, we recently forked the original JPetStore and created a distributed application out of it. You may find the distributed JPetStore in our GitHub repository (https://github.com/research-iobserve/jpetstore-6).

We support three branches in this repository for different purposes:

  • distributed-with-presentation-layer contains the distributed version of the JPetStore without monitoring instrumentation
  • kieker-monitoring contains a version instrumented with Kieker monitoring probes
  • iobserve-monitoring contains a version instrumented with extended iObserve monitoring probes which also log request parameters

The current design of the distributed JPetStore is as follows:

Architecture and deployment of the distributed JPetStore


The account service, the catalog service, and the order service each contain their own database based on HSQLDB. These could be replaced by any other database accessible via JDBC. As the order and cart components share a database, they are deployed in the same service.

Our setup comes with a Dockerfile for each service and a Docker Compose script to run the complete shop system. You may find all sources and an introductory README on GitHub (https://github.com/research-iobserve/jpetstore-6).


Extending the Entropy-based Software Analysis

In Analyzing structural properties of Java applications, we introduced how to use our analysis tool to evaluate Java applications. However, the underlying metrics are applicable to other sources (models, code, data). We demonstrated this capability by developing model-to-hypergraph adapters for Ecore metamodels, PCM models, and GECO transformation composition models. In this article, we illustrate how to create your own extension of the analysis tool.


The analysis tooling is based on Eclipse (Neon) and utilizes the Eclipse extension mechanism. Therefore, it is useful to understand the basic principles and concepts of Eclipse extensions, which can be found here.

The source code of the analysis tool can be found on GitHub.

Basic Concepts

The analysis tool comes with a set of metrics performing analyses based on a hypergraph. A hypergraph comprises nodes and edges, where a node can be connected to zero or more edges, and an edge can connect multiple nodes. In addition, we can modularize the hypergraph by grouping nodes into modules. Such a hypergraph is then called a modular hypergraph. The metamodel for hypergraphs can be found in Hypergraph.ecore.

To connect the hypergraph metrics to any kind of software artifact, a mapping transformation is required. Such a transformation queries the source model representing a software artifact and constructs a hypergraph from the query results. The hypergraph is the target model of the transformation. Therefore, before creating an extension, you need a way to query your source model, and an idea of how to map the structure of your software artifact to a hypergraph or modular hypergraph.
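The mapping idea can be sketched in plain Java. The class and method names below are our own illustration, not part of the analysis tool: a toy source model (classes and the classes they reference) is queried and turned into a hypergraph whose nodes are the classes and whose edges are the references.

```java
import java.util.*;

/** Minimal stand-in for the concepts in Hypergraph.ecore: nodes and edges. */
public class ToyHypergraph {
    public final List<String> nodes = new ArrayList<>();
    public final List<String> edges = new ArrayList<>();

    /**
     * Map a toy source model (class name -> referenced class names) onto a
     * hypergraph: every class becomes a node, every reference an edge.
     */
    public static ToyHypergraph fromReferences(Map<String, List<String>> classes) {
        ToyHypergraph graph = new ToyHypergraph();
        graph.nodes.addAll(classes.keySet());
        for (Map.Entry<String, List<String>> entry : classes.entrySet()) {
            for (String target : entry.getValue()) {
                graph.edges.add(entry.getKey() + "->" + target);
            }
        }
        return graph;
    }

    public static void main(String[] args) {
        Map<String, List<String>> model = new LinkedHashMap<>();
        model.put("Catalog", List.of("Product"));
        model.put("Product", List.of());
        ToyHypergraph graph = fromReferences(model);
        System.out.println(graph.nodes.size() + " nodes, " + graph.edges.size() + " edge(s)");
    }
}
```

The real adapters follow the same pattern, except that the source model is queried via EMF and the target is an instance of the Hypergraph.ecore metamodel.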

Creating an Extension

As a reference, you may use the Ecore extension for the analysis tool, which can be found here. An extension comprises at least four parts: the transformation; the analysis job, which is used to execute the transformation and the metrics; an Activator (which I am not totally sure is necessary anymore); and an analysis job provider.

Analysis Job Provider

The analysis job provider must implement the IAnalysisJobProvider interface, which declares two methods:

interface IAnalysisJobProvider {
	/**
	 * Returns the file extension the job is designed for.
	 */
	def String getFileExtension()

	/**
	 * Creates an instantiated and configured analysis job.
	 *
	 * @param project the project the file belongs to
	 * @param file the file containing the model, or information on where to find the model
	 * @param shell refers to the display, which is required to inform the user about progress
	 *
	 * @return an analysis job
	 */
	def AbstractHypergraphAnalysisJob createAnalysisJob(IProject project, IFile file, Shell shell)
}

The file extension is used to define the content type this extension supports. The second method instantiates the Eclipse job used to execute the source model transformation and the analysis.
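Conceptually, this provider mechanism lets the tool pick the right analysis job based on a file's extension. The following self-contained sketch is our own simplification of that idea: the real tool wires providers via the Eclipse extension registry and returns AbstractHypergraphAnalysisJob instances, whereas here a plain string stands in for the job.

```java
import java.util.*;
import java.util.function.Supplier;

/** Sketch of the provider lookup: each provider registers the file extension
 *  it supports; the tool asks the matching provider to create the job. */
public class JobProviderRegistry {
    private final Map<String, Supplier<String>> providers = new HashMap<>();

    /** Register a job factory under the extension it declares (cf. getFileExtension). */
    public void register(String fileExtension, Supplier<String> jobFactory) {
        providers.put(fileExtension, jobFactory);
    }

    /** Find the provider for a file name and create its job (cf. createAnalysisJob). */
    public String createJobFor(String fileName) {
        int dot = fileName.lastIndexOf('.');
        String extension = (dot < 0) ? "" : fileName.substring(dot + 1);
        Supplier<String> factory = providers.get(extension);
        if (factory == null) {
            throw new IllegalArgumentException("No analysis job provider for ." + extension);
        }
        return factory.get();
    }

    public static void main(String[] args) {
        JobProviderRegistry registry = new JobProviderRegistry();
        registry.register("ecore", () -> "Ecore analysis job");
        System.out.println(registry.createJobFor("metamodel.ecore"));
    }
}
```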

Analysis Job

The analysis job class must inherit the basic metric calculations from the AbstractHypergraphAnalysisJob class, which provides four metrics for hypergraphs and modular hypergraphs:

  • calculateSize to calculate the size of a hypergraph
  • calculateComplexity to calculate the complexity of a hypergraph
  • calculateCoupling to calculate the inter module complexity of a modular hypergraph
  • calculateCohesion to calculate the cohesion of a modular hypergraph. As cohesion is a ratio metric which requires a maximally interconnected graph as basis, it first transforms the hypergraph into a graph representation and then applies its graph cohesion metric.

All four methods take three arguments: a hypergraph, an Eclipse progress monitor, and a reference to the result model. The result model is provided by AnalysisResultModelProvider.INSTANCE.

The run method of an analysis job follows a similar structure, as depicted in the following listing.

override protected run(IProgressMonitor monitor) {
	/** Create a new resource set. */
	val resourceSet = new ResourceSetImpl()
	/** Load the artifact. */
	val Resource source = resourceSet.getResource(URI.createPlatformResourceURI(file.fullPath.toString, true), true)

	/** Test if the artifact contains any data. */
	if (source.contents.size > 0) {
		/** Get the result model handler. */
		val result = AnalysisResultModelProvider.INSTANCE

		/** Obtain the model from the resource. */
		val model = source.contents.get(0) as EPackage

		/** Transform the model into a (modular) hypergraph. */
		val emfMetaModel = new TransformationEMFInstanceToHypergraph(monitor)
		emfMetaModel.generate(model)
		/** Generate some general statistics about the hypergraph. */
		result.addResult(project.name, "number of modules", emfMetaModel.result.modules.size)
		result.addResult(project.name, "number of nodes", emfMetaModel.result.nodes.size)
		result.addResult(project.name, "number of edges", emfMetaModel.result.edges.size)
		/** Calculate the metrics. */
		calculateSize(emfMetaModel.result, monitor, result)
		calculateComplexity(emfMetaModel.result, monitor, result)
		calculateCoupling(emfMetaModel.result, monitor, result)
		calculateCohesion(emfMetaModel.result, monitor, result)
	} else {
		MessageDialog.openError(this.shell, "Model empty", "The selected resource is empty.")
	}
	return Status.OK_STATUS
}

It is of course possible to execute the different metrics multiple times, using different hypergraphs. For example, an Ecore metamodel could be analyzed completely, mapping all classes to nodes and all references to edges, or the analysis could be limited to the containment hierarchy. In such cases, you need multiple transformations providing the different hypergraphs for analysis, and you must execute the metrics for each hypergraph.
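Such multiple views can be sketched roughly as follows. The toy model and names are ours, not the tool's API: the same set of references yields both a full view and a containment-only view, and each view would be fed to the metrics separately.

```java
import java.util.*;

/** Sketch: derive two views from one toy metamodel, a full view with all
 *  references as edges and a view restricted to containment references. */
public class HypergraphViews {
    /** A toy reference between two classes, flagged as containment or not. */
    public record Reference(String source, String target, boolean containment) {}

    /** Count the edges of a view; optionally keep containment references only. */
    public static int edgeCount(List<Reference> references, boolean containmentOnly) {
        return (int) references.stream()
                .filter(r -> !containmentOnly || r.containment())
                .count();
    }

    public static void main(String[] args) {
        List<Reference> references = List.of(
                new Reference("Shop", "Catalog", true),
                new Reference("Catalog", "Product", true),
                new Reference("Order", "Product", false));
        System.out.println("full view: " + edgeCount(references, false) + " edges");
        System.out.println("containment view: " + edgeCount(references, true) + " edges");
    }
}
```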

Model to Hypergraph Transformation

The core of an extension is the model-to-hypergraph mapping transformation. It must extend the AbstractTransformation class, an extension of the IGenerator interface of GECO. The abstract class defines a general result property result, a general constructor, and the abstract method workEstimate. The latter is used to estimate the execution effort of the transformation before it runs, which is used to set up the progress bar.

The core method of the transformation is generate, which implements the actual mapping.

override generate(EPackage input) {
	result = HypergraphFactory.eINSTANCE.createModularHypergraph
	/** Query the input model and populate the hypergraph here. */
	return result
}
In case of a transformation that only produces a hypergraph and not a modular hypergraph, the first line must be adjusted accordingly.

To construct a hypergraph, the analysis tooling provides a set of construction methods in a factory called HypergraphCreationFactory. The factory defines eight operations:

  • createNode(Hypergraph hypergraph, String name, EObject element)
  • createNode(ModularHypergraph hypergraph, Module module, String name, EObject element)
  • createEdge(Hypergraph hypergraph, Node source, Node target, String name, EObject element)
  • createModule(ModularHypergraph hypergraph, String name, EObject element)
  • deriveNode(Node node)
  • deriveEdge(Edge edge)
  • deriveModule(Module module)
  • createUniqueEdge(ModularHypergraph hypergraph, Node source, Node target)

The operations always return the created object, but they also have side effects. The create-operations add the created object to the specified hypergraph, and the derive-operations are used when one hypergraph is derived from another: they create a new node, edge, or module and let it refer to its predecessor element.

The parameters have the following meaning:

  • hypergraph always refers to the hypergraph the node, edge, or module belongs to.
  • name is the name of the node, module, or edge.
  • element refers to the object the node, module, or edge represents. For example, if a node represents an Ecore metamodel class, then this class is the element to be passed to the method.
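The contract of the create-operations (return the created object, register it in the hypergraph as a side effect) can be sketched as follows. This is our own simplified Java illustration; the real HypergraphCreationFactory works on the EMF-generated model classes and additionally records the represented EObject.

```java
import java.util.*;

/** Simplified illustration of the factory contract: every create-operation
 *  returns the created object and adds it to the given hypergraph. */
public class CreationFactorySketch {
    public static class Node {
        public final String name;
        public Node(String name) { this.name = name; }
    }
    public static class Module {
        public final String name;
        public final List<Node> nodes = new ArrayList<>();
        public Module(String name) { this.name = name; }
    }
    public static class Hypergraph {
        public final List<Node> nodes = new ArrayList<>();
        public final List<Module> modules = new ArrayList<>();
    }

    /** cf. createNode(Hypergraph, String, EObject): add a node and return it. */
    public static Node createNode(Hypergraph hypergraph, String name) {
        Node node = new Node(name);
        hypergraph.nodes.add(node);
        return node;
    }

    /** cf. createNode(ModularHypergraph, Module, ...): also attach the node to a module. */
    public static Node createNode(Hypergraph hypergraph, Module module, String name) {
        Node node = createNode(hypergraph, name);
        module.nodes.add(node);
        return node;
    }

    /** cf. createModule(ModularHypergraph, String, EObject): add a module and return it. */
    public static Module createModule(Hypergraph hypergraph, String name) {
        Module module = new Module(name);
        hypergraph.modules.add(module);
        return module;
    }

    public static void main(String[] args) {
        Hypergraph hypergraph = new Hypergraph();
        Module catalog = createModule(hypergraph, "catalog");
        createNode(hypergraph, catalog, "Product");
        System.out.println(hypergraph.nodes.size() + " node(s) in "
                + hypergraph.modules.size() + " module(s)");
    }
}
```

Returning the created object lets a transformation chain calls, e.g. immediately create edges between freshly created nodes, without looking them up in the hypergraph again.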


We illustrated above how a new kind of artifact can be added to the analysis tool via an Eclipse plug-in. We suggest using one of the existing implementations as a reference beside this post. As we kept the extension mechanism as simple as possible, it is easy to add your own transformation. To get started, we provide some links as reference: