Structure101 Workspace with IntelliJ IDEA and Gradle

The IDEA gradle import creates module compile output paths that are different from the gradle build output paths. Structure101 Workspace uses the IDEA module compile output paths to find the bytecode, so running a gradle build will not propagate changes to the code into the Structure Map.

A workaround is to run the IDEA build to generate bytecode into the module compile output paths. But this only works if the setting “Delegate IDE build/run actions to gradle” (under Project Settings | Build, Execution, Deployment | Build Tools | Gradle | Runner) is unchecked. And it is not ideal since two sets of bytecode are generated.

An alternative solution is to use the gradle idea plugin to override the module compile output paths when the project is imported into IDEA.

Add the following to the root build.gradle file (use directory names appropriate for your gradle build):

apply plugin: 'idea'

subprojects {
  idea {
    module {
      outputDir = file("$buildDir/classes/java/main")
      testOutputDir = file("$buildDir/classes/java/test")
      inheritOutputDirs = false
    }
  }
}

Import your gradle project into IDEA in the usual way. You can use either the file or folder based format for the project.

If you check “Create separate module per source root” in the import dialog the import will create IDEA modules for each source root in the gradle module. The module names will be suffixed, usually with _main and _test. Having separate modules for test code is useful when extracting modules from the monolith. (After the import completes you can toggle this setting off and on in the gradle settings dialog.)

The separate test modules allow the test code to be excluded from the Structure Map by adding excludes in the Structure101 Workspace settings dialog (<module_name>.*). This pattern excludes the code and the module from the Structure Map.

After import the module compile output paths will be set to the gradle output paths that you added to the build.gradle file. Building with gradle at the command line will now update the bytecode that Workspace is referencing.

If you wish to trigger the gradle build using the IDEA build commands the “Delegate IDE build/run actions to gradle” setting can be checked.


The latest version of the Workspace plugin for IDEA now has an additional option on the settings page.

Include ‘Test Source Folders’ when populating Structure Map

This option is off by default. Any code marked as test in the IDEA module settings Sources tab (see below) will not be included in the Structure Map model if this new option is unchecked. When checked, additional top level modules will be created in the Structure Map containing test code. These modules will have the same name as the main module with the configurable suffix (-test by default).

Note that a Studio project created from the Workspace .hsw file will show the same modules as Workspace. So if the Test option is checked then Studio will also show the test modules.

Module extractions with many tens or hundreds of violations to resolve will benefit from preparatory planning and estimation.

This is the fourth post in a series that will explore the challenges of migrating a monolithic code base to a modular architecture.

Series links:
Post 1 – Migrating from Monolith to Modular
Post 2 – Monolith to Modular – The Extract Module Use Case
Post 3 – Monolith to Modular – Managing Violations
Post 4 – Monolith to Modular – Sizing and Estimating Scope (this post)


In our previous post we described the use of Structure101 Studio and Workspace to identify, manage and resolve dependencies that violate a target architecture. In this post we describe use of Structure101 Studio’s Dependency Breakout to size and scope the refactoring effort.

In the example below, three packages have cyclic dependencies.

Attempting to move the code responsible for remote communication into a new comms module resolves the feedback dependency from seaview to assemblies. But another is created from assemblies to comms, which prevents creation of the new module (the dotted arrow).

In the following LSM, the feedback dependency is selected (highlighted in blue) and the details of the interfaces, classes and members that comprise the dependency are shown in the breakout beneath. (Note that in the Structure101 Project Properties dialog, Granularity is set to detail). In this example there are 40 code references contributing to the feedback dependency.

The breakout shows the size and scope of the refactoring needed to resolve the feedback dependency.

It might be tempting to bulk import a story per dependency into an agile planning tool. But careful grouping of the violations will yield fewer stories and a shorter time to deliver them. The number and nature of the stories is driven by the type of the dependencies and how they are to be fixed and tested, and by whom:

  • Will the fix be at the from or to end of the dependency – or both?
  • Which team is responsible for the code that will be refactored?
  • Will the refactoring be done by incorporating the stories into a team’s existing backlog?
  • Or is there a re-engineering team in place tasked with the dependency removal?
  • How will the changes be tested?

The detail list in the lower pane of the dependency breakout can be exported via the context menu (Copy | Copy All) and pasted into a spreadsheet.

Use this raw list as input to the backlog creation. Ordering by the from or to columns helps to identify dependencies that can be grouped together into a single story.
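The grouping step can be sketched with a short script: roll each exported code reference up to its package, then collect the references into one candidate story per from/to package pair. The reference format and all names below are hypothetical; adapt the parsing to whatever columns your export actually contains.

```python
from collections import defaultdict

def package_of(reference: str) -> str:
    """Keep the leading lowercase package segments of a code reference."""
    parts = reference.split("(")[0].split(".")
    pkg = []
    for part in parts:
        if part and part[0].isupper():  # first class-name segment ends the package
            break
        pkg.append(part)
    return ".".join(pkg)

def group_violations(rows):
    """Group (from, to) code references into one candidate story per package pair."""
    stories = defaultdict(list)
    for frm, to in rows:
        stories[(package_of(frm), package_of(to))].append((frm, to))
    return dict(stories)

# Hypothetical rows pasted from the dependency breakout export:
rows = [
    ("com.app.ui.OrderView.render()", "com.app.core.OrderImpl.internalModel()"),
    ("com.app.ui.OrderList.refresh()", "com.app.core.OrderImpl.internalModel()"),
    ("com.app.batch.Export.run()", "com.app.core.OrderImpl.internalModel()"),
]
stories = group_violations(rows)
# Three raw references collapse into two candidate stories:
# (com.app.ui -> com.app.core) and (com.app.batch -> com.app.core)
```

Here three raw references collapse into two candidate stories, one per dependent package; sorting or grouping on other columns (class, member, team ownership) works the same way.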

Structural refactoring should have no effect on application functionality. The only way to test the changes is through regression testing. If the application does not have an automated regression suite the cost of manual testing can far exceed the cost of changing the code. The worst case scenario is one in which the velocity of the team is determined not by the code changes but by the manual regression testing effort.


As an example, consider the situation where a method in an implementation class is being called directly. The method isn’t suitable for inclusion in the interface because it exposes the module’s internal model in the return type. A new method with a suitable new class for the return type must be implemented, which delegates to the existing method and converts the internal model to the new return type. The first violation probably appeared when a developer under pressure to deliver a change in functionality took the shortcut of referencing the implementation class, rather than implementing the delegating method. Once it exists, other calls to the method can appear across the code base. Another developer under pressure takes the same shortcut or, worse, blindly copies the call to the implementation class.
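The delegating-method pattern described above can be sketched as follows. All names here are hypothetical and the original scenario is a compiled language behind a module interface, but the shape is the same: the new method reuses the existing one and converts the internal model into an interface-safe type.

```python
from dataclasses import dataclass

# Internal model: should not leak outside the module.
@dataclass
class InternalOrder:
    id: int
    raw_state: dict

# Interface-safe return type for the module boundary.
@dataclass
class OrderSummary:
    id: int
    status: str

class OrderService:
    """The module's public interface."""
    def order_summary(self, order_id: int) -> OrderSummary:
        raise NotImplementedError

class OrderServiceImpl(OrderService):
    def _load_order(self, order_id: int) -> InternalOrder:
        # The method callers had been invoking directly; it exposes the
        # internal model in its return type, so it can't go on the interface.
        return InternalOrder(order_id, {"status": "SHIPPED"})

    def order_summary(self, order_id: int) -> OrderSummary:
        # The new delegating method: reuse the existing one, then convert
        # the internal model to the interface-safe return type.
        internal = self._load_order(order_id)
        return OrderSummary(internal.id, internal.raw_state["status"])
```

Callers are then refactored to depend on OrderService and its new method rather than on the implementation class, removing the violating references.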

One option is to group all of these calls together in a single story. When playing the story the developer will implement the missing delegating method and then refactor all the offending code to use the interface and its new method. But if the calls are spread across the code base this one story could affect many different areas of functionality. Without an automated test suite such a change would result in a significant manual testing effort.

So another option is to group violations into stories for each functional area. A story to implement the delegating method must be played first. When sprints are planned the stories affecting the same functional area are played together such that many changes can be regression tested together.

Whichever way the work is partitioned and assigned to developers, each story should be a discrete piece of work that can be completed and deployed independently. This allows the stories to be played at a pace to suit the team(s) involved. Should priorities change, the refactoring can be halted without losing any stories already played.


Monolith to Modular, Part 3 – Managing Violations

Dependencies that violate your target module structure need to be resolved before code can be extracted from the monolith into the new module.

This is the third post in a series that will explore the challenges of migrating a monolithic code base to a modular architecture.

Series links:
Post 1 – Migrating from Monolith to Modular
Post 2 – Monolith to Modular – The Extract Module Use Case
Post 3 – Monolith to Modular – Managing Violations  – (this post)
Post 4 – Monolith to Modular – Sizing and Estimating Scope


In our previous post we described the simplest “Extract Module” scenario, in which the new module has no dependencies on the remaining monolith. This post describes the scenario where the proposed new module and remaining monolith are mutually dependent.

The circular dependency has to be resolved by removing the dependencies that violate the intended module architecture, before code can be physically moved from the monolith into the new module (build systems don’t like cycles). The refactoring to resolve these violations can be complex and time consuming, taking weeks or even months to complete.

Which leads to potential issues:

  • What if new violations are introduced whilst the existing ones are being removed?
  • How will the developers know when all the violations are resolved?
  • How would a developer from another team know that they have created a new violation?

The combined use of 1) an Action List, 2) a Structure Spec, and 3) Structure101 Workspace is one way to resolve these issues. The Action List plus Structure Spec captures the target module architecture in Structure101 Studio. Publication of the Actions and Spec communicates this architecture to the wider team via Structure101 Workspace in their IDE. Any violations to the target architecture are highlighted in the Structure Map. The Structure Map allows a developer to quickly focus on existing code that violates the target architecture. “Spotlighting” can be used to filter the Structure Map to just the relevant code. If code changes cause new violations, these will also be highlighted in the Structure Map.

In the example below, the modules utilities, logging and nalpeiron have already been extracted from core (previous post). For this new extraction, the remaining core has been split into four new modules: assemblies, comms, studio and data-models. The simulation of these new modules in Studio (Structure tab) has resulted in feedback dependencies shown as dotted lines.


The four new modules are then added to the Structure Spec which (by default) shows the same feedback dependencies. In the Structure Spec we refer to them as violations. The Action List named Module Extract is shared (indicated by the hand icon). This means the actions that created the new modules (mostly moving packages and classes) can be published to a Structure101 repository and imported into the Structure101 Workspace IDE plugin (Eclipse/IntelliJ). The Structure Spec will be published at the same time.


In the Publish dialog Publish Spec and Publish action list must both be checked. The repository project is named module-extraction.


Switching to Workspace in the IDE now…

In the Workspace settings (Eclipse in this example) the Source of specs is pointed to the repository and the module-extraction project selected.


A full refresh of the Structure101 Workspace’s Structure Map loads and applies the Action List and Structure Spec. The violations are shown as dotted red lines on the resulting Structure Map inside the IDE:


A developer working to resolve the violations can drill into the map and follow the red lines to reveal the code item causing the violations:


Alt-clicking on the code item navigates to the item in the IDE source editor where the root cause can be examined and/or refactored out.

If a new dependency is introduced that violates the architecture, it is shown in the Structure Map (again, inside the IDE) as a purple line. The names of the packages and classes causing the violation will also be purple.


And a developer can drill down following the purple elements to find the offending code:


And then use spotlighting to filter the Structure Map to the relevant/dependent code:



Capturing the planned changes to the module structure and making them available in the IDE in this way gives the wider team an understanding of the to-be architecture as they continue ongoing development. And it gives the developers working to resolve the violations a dynamic, navigable view of the offending code: a ‘to-do’ list of refactorings presented visually, with all the attendant navigation and spotlighting functionality the Structure Map provides.

When all the violating dependencies have been removed, the final task is to create the module and move the source code into it from the monolith. The developer making these changes can follow the Action list step by step to implement the module creation and make the class moves that were simulated in Structure101 Studio.

In the next post we will describe how the violating dependencies can be used to build a backlog of work for planning and estimation.

Monolith to Modular, Part 2 – The Extract Module Use Case

The basic use case of the monolith to modular strategy is “extract a new module from a monolithic code base”

This is the second post in a series that will explore the challenges of migrating a monolithic code base to a modular architecture.

Series links:
Post 1 – Migrating from Monolith to Modular
Post 2 – Monolith to Modular – The Extract Module Use Case  – (this post)
Post 3 – Monolith to Modular – Managing Violations
Post 4 – Monolith to Modular – Sizing and Estimating Scope


In our previous post we introduced the strategy “Transform the monolith through incremental extraction of modules”. This post describes the basic use case of this strategy – “Extract a new module from a monolithic code base”

There are a number of scenarios that this use case supports. In most cases the classes cannot be moved directly into the new module because this would result in cyclic module dependencies, which are not supported by most module technologies (Eclipse, IntelliJ, Maven, Java 9, …).

From most to least complex, the variants are:

  • Donor module is heavily tangled
  • Classes that will end up in extracted module are distributed across the donor packages
  • User wants to isolate and restrict access to an existing module via an interface module, cutting off direct access to the original module
    • User wants to retire existing implementation in favour of a new module
  • Extracted module is created from one or more packages that are at the bottom or top levels of packaging (no cyclic dependencies)

Any of the above are further complicated when there are multiple donor modules or where multiple new modules are to be created in a single step.

In each case the fundamental usage is the same:

  1. Use Structure101 Studio to simulate the moving of classes into the target module
  2. Use Structure101 Workspace to apply these simulations and expose violating dependencies within the IDE
  3. Refactor away the violating dependencies
  4. Actually move the classes to the target module


A Demonstration of Extract Module

The simplest extraction is of code that doesn’t cause a cyclic dependency when it is moved to a new module. The module code has no dependencies on anything that will remain in the monolith. The monolith will depend on the new module and the dependency structure will be a directed acyclic graph. The module can be created and the code moved without any preparatory refactoring. This reduces the use case to three steps:

  1. Use Structure101 Studio to simulate the moving of classes into the target module
  2. Use Structure101 Workspace to apply these simulations within the IDE
  3. Actually move the classes to the target module

In the example below the core module is the monolith. The diagram is a layered dependency structure wherein a package is positioned above any packages on which it depends. Selecting the nalpeiron package shows its dependencies. Note that it does not have any dependencies on any of the other packages in core. It is dependent only on external libraries (which are not visible because the Structure101 Studio project has the Hide externals property set). We would expect that this package can be moved out of core into its own module without causing a cyclic dependency.



Structure101 Studio can prove the theory by creating a virtual module and moving the nalpeiron package and its contained code into it. The new module is created via the context menu Add | Module.



The nalpeiron package is then moved (using drag and drop) into the nalpeiron module which retains the nalpeiron package structure. (Dropping packages, classes or interfaces onto a module replicates the source packaging)
It is recommended that package names are not changed when the code is extracted to the new module. This avoids changes to the import statements in code that references the moved classes, which minimises the size and impact of the change set.

As expected there are no feedback dependencies from the new nalpeiron module into core.



The changes made are captured in an Action List. The list is renamed to describe its purpose and shared so it can be published to the Structure101 Repository.



The Structure Spec is updated to add the new licensing module. Any module not present in the Structure Spec appears in red (it is considered a violation if it is not in the spec). Right click on the red module and select Add to spec from the context menu. The Structure Spec will also be published to the repository.

Now publish to a Structure101 repository. In the publish dialog, make sure Publish spec and Publish Action List are both checked.



In the IDE the Structure101 Workspace properties are updated to reference the published project. When the property dialog is closed the change triggers Structure101 Workspace to perform a full refresh during which the Structure Map and the Action List are applied.



Below is the Structure Map and Action List as seen in Eclipse. The Action List is an expanded form of the list created in Structure101 Studio. This makes it clear to developers what modules and packages are to be created and which classes are to be moved. The Workspace plugin applies these actions to the Structure Map so that the classes that are to be moved are shown in their intended location, whereas the package explorer view shows them in their current location.



Note that the Structure Map will show module to module dependencies including dependencies on the new nalpeiron module. Hovering over nalpeiron shows that core and build-tools are dependent on it. When the new nalpeiron module has been created and the classes moved, both core and build-tools will need nalpeiron added to their project dependencies.



The developer tasked with the module extract has all the information necessary to go ahead and create the new module. In the meantime all developers can see what the architecture is going to be. Furthermore, if they were to introduce a dependency from nalpeiron into core, which would violate the specified layering, it would be highlighted in the Structure Map. The diagram below shows the effect of adding a new member of type S101 into the NALPServer class. The Structure Map shows a dotted line from NALPServer to S101 to highlight the violation.



After the module is created in the IDE and the classes have been moved, the package explorer shows the classes in their new module. The entries in the Action List are greyed out to indicate they can no longer be applied.



And back in Structure101 Studio a refresh of the project brings in the changes made in the IDE. The entries in the Action List are also greyed out indicating that they are obsolete.



In this example the Action List contained a single module extraction and it could now be deleted. When multiple extractions are present in the list, the obsolete actions of a module extract that has been implemented can be deleted with Remove obsolete actions.

A similar extraction exercise can be performed on the logging package which is at the lowest level in com.headway.



We can see that the util package depends on the logging package. Once logging has been extracted into its own module, util becomes the lowest level package within com.headway and can itself be extracted to a new module.

After util is extracted, then brands and foundation become candidates for module extractions.
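This bottom-up extraction order is essentially a topological sort of the internal package dependencies: repeatedly peel off the packages with no remaining internal dependencies. A minimal sketch, with hypothetical dependency data mirroring the example above:

```python
def extraction_waves(deps):
    """Peel off packages with no remaining internal dependencies, wave by wave."""
    remaining = {pkg: set(ds) for pkg, ds in deps.items()}
    waves = []
    while remaining:
        ready = sorted(p for p, ds in remaining.items() if not ds)
        if not ready:
            raise ValueError("cycle detected; extraction blocked")
        waves.append(ready)
        for p in ready:
            del remaining[p]
        for ds in remaining.values():
            ds.difference_update(ready)
    return waves

# Hypothetical internal dependencies for the packages discussed above:
deps = {
    "logging": set(),
    "util": {"logging"},
    "brands": {"util", "logging"},
    "foundation": {"util"},
}
# extraction_waves(deps) -> [["logging"], ["util"], ["brands", "foundation"]]
```

A cycle anywhere in the remaining packages stops the peeling, and the dependencies causing it have to be refactored away first.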



Extracting a module from the lowest level of the dependency structure is a low risk introduction to the extract module use case. But it may not be applicable to your code-base, which means you will have to contend with dependency violations on your first module extraction. In our next post we discuss how to manage those violations.

Migrating from Monolith to Modular – Part 1

This is the first post in a series that will explore the challenges of migrating a monolithic code base to a modular architecture.

Series links:
Post 1 – Migrating from Monolith to Modular (this post)
Post 2 – Monolith to Modular – The Extract Module Use Case
Post 3 – Monolith to Modular – Managing Violations
Post 4 – Monolith to Modular – Sizing and Estimating Scope


If you type ‘migrate monolith’ or ‘refactor monolith’ into a search bar, the resulting pages of links have a significant bias towards migrating a monolith to microservices. Chris Richardson’s post Refactoring a Monolith into Microservices is a good example of the strategy-based approach to migration that is suggested by a number of authors. His strategy number 3 is Extract Services.

The third refactoring strategy is to turn existing modules within the monolith into standalone microservices. Each time you extract a module and turn it into a service, the monolith shrinks. Once you have converted enough modules, the monolith will cease to be a problem. Either it disappears entirely or it becomes small enough that it is just another service.

The problem with this is that it assumes the modules within the monolith are obvious, which, more often than not, is not the case. The hardest problem for many, if not most, organisations migrating to microservices is identifying and extracting modules.

Typing ‘monolith java 9 module’ in the search bar returns numerous links to information about Java 9 modules. But in amongst the general knowledge you will find “Modules vs. Microservices” and “Modules or Microservices”. The first is a link to the O’Reilly site and the book Java 9 Modularity by Sander Mak. The second is a link to his slide presentation in which he argues the case for Java 9 modules providing the modularity benefits inherent in microservices without the upheaval they inflict on organisation and process. He opens his presentation with a confession that he likes building monoliths, then qualifies this to modular monoliths. He makes a compelling argument for the value of the modular monolith over microservices. In any case, a modular monolith seems to be a prerequisite of microservices, as Martin Fowler suggests in “Monolith First“.

Of course some organisations can only dream about Java 9 and microservices. Applications that are old enough to have reached their teens (or even early twenties!) are clearly core to the organisation and its business. Such applications are likely to have been under continuous development. The team will have cycled several times over the years contributing to varied styles of implementation. Whatever the architectural vision was at the start of the project it is unlikely to help today’s team understand the code. Failed architectural initiatives and incomplete technology migrations contribute to the mountain of technical debt that inevitably builds under the pressure for functional change. The code base becomes a tangle of inter-dependencies with little if any regard for interfaces and segregation. Ironically such applications probably have the most to gain from migration to a modular architecture. But where to start when “everything depends on everything else”?

Whatever the desired end state, the challenge is the same: given a monolith of tangled code, how can it be transformed into a modular structure? The business will invariably prevent a Big Bang; such a transformation almost always needs to be carried out incrementally.

Which leads to a simple strategy:

Transform the monolith through incremental extraction of modules

Paraphrasing Chris Richardson:

“Each time you extract code into a module, the monolith shrinks. Once you have extracted enough modules, the monolith will cease to be a problem. Either it disappears entirely or it becomes small enough that it is just another module.”

Whether the end goal is Java 9 modules or microservices doesn’t matter; the module extraction strategy applies in either case. In fact, neither needs to be part of the transformation. Either technology can help to enforce strong encapsulation and well-defined interfaces, but that can also be achieved using dependency analysis tools like Structure101 Studio and Structure101 Workspace, avoiding the added complexity of a technology migration on top of the structural one.

The Devil is in the Dependencies

Sander Mak describes the three tenets of modularity as strong encapsulation, well-defined interfaces and explicit dependencies. The key to successful extraction of a module is making explicit the inbound and outbound dependencies of the code being extracted. Only then can they be analysed, managed and refactored to comply with the intended architecture. The degree of tangling within the code being moved doesn’t matter; it is the module-level dependencies that are the focus of the extraction. Indeed, moving code from one module to another can introduce tangles into an otherwise acyclic dependency structure.

Consider the simple package structure below that is organised into a layered structure diagram. The arrows show the dependencies between the packages rolled up from the contained classes. The packages and their dependencies form a directed acyclic graph (DAG). There are no tangles in this structure.




However, extracting the package codeforextraction to a separate module creates a tangle as shown below. The dotted line indicates a dependency cycle between the modules.


Cyclic Dependency


This module level cycle means that the code cannot be built by a framework such as Maven. It would fail on finding the cyclic dependency during parsing of the pom.xml files. The dependencies causing the cycle need to be removed before the code can be extracted to the new module. Doing so can be a complex and time consuming refactoring exercise.
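The module-level check that tools and build systems perform here is conceptually simple: roll the class-level dependencies up to module level according to each class's (current or simulated) module, then look for a cycle. A minimal sketch with hypothetical class and module names:

```python
def module_edges(class_deps, module_of):
    """Roll class-level dependencies up to module-level edges."""
    return {(module_of[a], module_of[b])
            for a, b in class_deps
            if module_of[a] != module_of[b]}

def has_cycle(edges):
    """Depth-first search over the module graph; True if any cycle exists."""
    graph = {}
    for a, b in edges:
        graph.setdefault(a, set()).add(b)
        graph.setdefault(b, set())
    state = dict.fromkeys(graph, 0)  # 0 = unvisited, 1 = in progress, 2 = done
    def visit(m):
        if state[m] == 1:
            return True  # back edge: cycle found
        if state[m] == 2:
            return False
        state[m] = 1
        found = any(visit(n) for n in graph[m])
        state[m] = 2
        return found
    return any(visit(m) for m in list(graph))

# Hypothetical classes: A and B stay in the monolith, X is to be extracted.
class_deps = [("A", "X"), ("X", "B")]
before = {"A": "monolith", "B": "monolith", "X": "monolith"}
after = {"A": "monolith", "B": "monolith", "X": "extracted"}
# Before the move there are no module-level edges, so no cycle;
# after the move: monolith -> extracted -> monolith, a cycle.
```

Running the check against a simulated module assignment, before any code is physically moved, is exactly what makes this kind of extraction plannable.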

This series of posts describes how Structure101 Studio and Structure101 Workspace can be used to:

  • identify candidates for extraction
  • size and scope the refactoring effort
  • communicate the intended architecture to the wider team
  • monitor removal of violating dependencies (and guard against new ones)


The next post uses a simple example to illustrate the approach; follow-on posts will delve into more common and harder problems.

Introducing Structure101 Workspace

Our new IDE-resident product Structure101 Workspace includes some of the visualization and specification concepts from Structure101 Studio, but in a simpler and entirely new combination that is specifically designed for programmers.

The visualization is similar to the Studio LSM – in Workspace we call it the Structure Map – but unlike the LSM, you can overlay a Structure Spec onto the visualization, letting you browse the as-built dependency details through the specified architecture. And of course being in the IDE you can navigate between visualization and code very easily.

“It’s like Google Maps for your code!”- Stephan Schulze, Pentasys

A key feature is spotlighting, which highlights a specific item, exposes all its (in- and out-bound) dependent items, and hides everything else, showing the resulting filtered details in the context of the overall architecture. The spotlight can be controlled manually, or automatically follow the editor. Chasing code-level dependencies across your architecture was never so easy.

By showing dependencies related to what you are working on right now, in the context of the architecture, the architecture genuinely guides developers as they code (kind of the point of architecture!).

Behind the visualization is a live model of the workspace as it is now, including the changes you just made. In fact changes are emphasized so you can see how they impact the structure, and be warned if you violate any layering or visibility constraints.

We believe that the whole team will benefit from an increased appreciation for good structure (no more tangles!), and a shared (checked!) understanding of the architecture. Not to mention newcomers who will no longer need to spend months reconstructing a mental model of the architecture from the code, messing it up as they go…

Improved structure from the deeper levels meets architectural specs from the top, and agility wins.



Workspace can be used as a stand-alone product, and comes with its own lightweight Structure Spec Editor. Or Workspace can be used with Studio 5, which has a more sophisticated Structure Spec Editor and lets you use the other Studio features (like restructuring simulation) in combination.

If you currently use Structure101 Studio 4 along with the lightweight IDE plugin, you will find Studio 5 plus Workspace a much more powerful combination. The upgrade considerations are outlined in this upgrade guide.

It’s also worth noting that the price of Workspace is a lot lower than the price of Studio – it’s an every developer product. Details here. Download trial here.

Questions and comments as always very welcome at

Java User Group Presentations on Bridging the Divide between Architecture and Code

Structure101 co-founder, Chris Chedgey, will be delivering his latest talk – ‘Bridging the Divide between Architecture and Code’ at a number of Java User Group events across Europe and North America in the coming months.

We’ll be adding new dates to this list as they’re confirmed, so keep an eye out for updates if you don’t see anything in your area right now.

If you’d like to suggest an event not already on the list for Chris to speak at, drop us a mail, and/or contact your local Java User Group.


Java Usergroup Brussels

29th May ‘18


Java User Group Frankfurt

30th May ‘18
German National Library

Java Usergroup Berlin-Brandenburg

19th March ‘18

Java User Group München

16th April ‘18

Java User Group Dortmund

17th April ‘18

Java User Group Hamburg

18th April ‘18

Java User Group Hamburg

19th April ‘18


Java User Group Switzerland

20th March ‘18

Java User Group Switzerland

21st March ‘18

St Gallen

Java User Group Switzerland

22nd March ‘18
St Gallen

North America

Boston Java Meetup Group

30th January ‘18
Cambridge, MA

Houston Java Users Group

31st January ‘18
PROS, 3100 Main St

Toronto Java Users Group

22nd February ‘18
Free Times Cafe, Toronto

New York
New York Java Special Interest Group

28th February ‘18
New York

San Francisco
The San Francisco Java User Group

7th March ‘18
Pivotal Labs, San Francisco


Chris’s bio

Chris Chedgey is co-founder, product designer, and developer at Structure101 – a team dedicated to creating techniques and technology for transforming and controlling the structure of large evolving code-bases.

During a career spanning 30 years, Chris also worked on large military and aerospace systems in Europe and Canada, including the International Space Station. He has spoken at many user groups and conferences including Oredev, JavaOne, JAX, Javaland, 33rd Degree, JFocus, and Devoxx.


Static diagrams on wikis and white-boards might capture the vision of architects, but they don’t much help programmers to understand how the code they’re working on right now fits into the architecture. Nor are the programmers warned when they violate the diagrams as they forge changes, line-by-line.

This is a huge problem – it is ultimately individual lines of code that make or break an architecture; and we know that a clean architecture will help teams develop a more flexible product, with less complexity, less wasted effort, etc. Worse, without practical architectural guidance, programmers wrestle with invisible structures that emerge from thousands of inter-dependent lines of code.

And being invisible, these structures become ever more complex, coupled, and tangled. In fact, uncontrolled structure actively fights against productive development.

This talk shows how to rein in emergent code-base structures and gradually transform them into a cogent, defined architecture. You will see how…

  • Visualizing the emergent structure makes a code-base easier to understand.
  • Restructuring to remove tangles and reduce coupling makes the visualized code-base easier to work on.
  • Specifying layering and dependency rules converts good structure into a controlled architecture that guides the team as the code-base evolves and grows.

A key ingredient is a live visualization, inside the IDE, of the detailed code the programmer is working on, in the context of the overall architecture. In short, you will learn how bridging the architect/programmer divide can convert code-base structure from liability into an asset that actively works for development productivity.

Why cycles explode complexity

Software developers and architects would instinctively avoid cyclic dependencies given the choice – we’d never consciously create an architecture that was a ball of mud. For instance, we’d be more inclined to aim for something like this …

untangled example

rather than something like this (same components but with cyclic dependencies) …

tangled example

Why? Well, the second system has about 2x the number of dependencies. But it seems more than twice as complex. I could create an acyclic model with the same number of dependencies as the second, and we would probably be happy enough with it. There’s something about the cycles themselves.

One thing is for sure – cycles make it much harder to tell the story of a codebase. For the first system I can quite easily explain how PNUnit uses NUnit, which does this, and uses Codeblast for that, and Colors for something else, and so forth. I can’t describe the second system in the same way. I can try to explain that PNUnit uses NUnit, which uses CP, which uses TestLibraries, which uses NUnit (again) and PNUnit (which, as already mentioned, uses NUnit) and Log4net, which uses TestLibraries, which uses Log4net (again), which … zzzz … Explainability is clearly related to the number of paths through a system’s dependency graph, and cycles lead to many more paths, and very convoluted explanations! And not just explanations – tangled dependencies clobber any hope of separate testing, reuse, release, and such. This is because tangles explode the overall connectedness, or coupling, of your codebase, which sends complexity through the roof.
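The “story-telling” test can be sketched in code: an acyclic dependency graph has a linear reading order (a topological sort), while a tangled one has none. This is a minimal sketch, and the dependency sets below are hypothetical – loosely modeled on the PNUnit example, not taken from its actual build:

```python
from graphlib import TopologicalSorter, CycleError

# Hypothetical direct dependencies: each item maps to the items it uses.
acyclic = {
    "PNUnit": {"NUnit", "Codeblast"},
    "NUnit": {"Colors"},
    "Codeblast": set(),
    "Colors": set(),
}

# Same components, but NUnit now also depends back on PNUnit – a cycle.
tangled = dict(acyclic, NUnit={"Colors", "PNUnit"})

def story(deps):
    """Try to tell the codebase's story bottom-up; return None if no linear order exists."""
    try:
        return list(TopologicalSorter(deps).static_order())
    except CycleError:
        return None

print(story(acyclic))  # e.g. ['Codeblast', 'Colors', 'NUnit', 'PNUnit']
print(story(tangled))  # None – the cycle means there is no order to tell the story in
```

The acyclic graph always yields some bottom-up narration order; the tangled one yields none at all, which is exactly why its explanation trails off into “… zzzz …”.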

This can be measured.

Cumulative Component Dependency (CCD)

In Large-Scale C++ Software Design, John Lakos talks about dependency (and thereby complexity) being cumulative. His Cumulative Component Dependency, or CCD, recognizes that when one item depends on a second, it really depends on all the items that the second item depends upon, and that they depend upon, and so on. For instance, an item can be impacted when anything in its dependency closure changes. The CCD of a system is the sum of the dependencies of every item in the system. It is a very good indicator of the relative complexity of systems. Here’s how it works.

ccd simple

In this simple example (copied from John’s book) the items on the bottom row are considered to be dependent only on themselves; the items on the second row are each dependent on themselves plus 2 items on the bottom row; the top item is dependent on itself plus the 2 sets of 3; CCD is the sum of all the numbers, so for this system it is 17.
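The calculation can be sketched in a few lines of Python: compute each item’s dependency closure (itself plus everything reachable through its dependencies), then sum the closure sizes. The item names here are placeholders for the tree in the example:

```python
def closure(item, deps):
    """Everything `item` depends on, directly or indirectly, including itself."""
    seen, stack = {item}, [item]
    while stack:
        for dep in deps[stack.pop()]:
            if dep not in seen:
                seen.add(dep)
                stack.append(dep)
    return seen

# The simple binary tree from the example: 4 leaves, 2 middle items, 1 top item.
deps = {
    "top": {"left", "right"},
    "left": {"a", "b"},
    "right": {"c", "d"},
    "a": set(), "b": set(), "c": set(), "d": set(),
}

ccd = sum(len(closure(item, deps)) for item in deps)
print(ccd)  # 4*1 + 2*3 + 7 = 17
```

Each leaf contributes 1, each middle item 3, and the top item 7, giving the CCD of 17 from the text.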

The overall number of items and dependencies impacts the CCD somewhat, but when dependencies form cycles, the effect on CCD is explosive.


For example, the dependencies for this set of items all point downward (and are therefore acyclic), and its CCD is something less than 164 (it would be 164 if every item depended on every item on the next row down, which isn’t the case). By adding a single dependency from the item on the bottom layer to the item at the top, we create cycles.


The impact on the system coupling is dramatic – for example where “command” (left of 3rd row up) initially depended on just a few other items in the acyclic system (common and util directly, state indirectly), it now depends (directly or indirectly) on every other item in the system! In fact every item depends on every other, so the CCD has rocketed to over 1,200! In this example we know which is the disruptive dependency because we added it to the clean, acyclic structure and then highlighted it on the resulting diagram. But in reality cyclic dependencies can make a system an order of magnitude harder to understand and maintain, whatever way you measure it.
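The explosion is easy to reproduce at small scale. The sketch below uses a tiny hypothetical three-layer system (the names are invented, not the “command”/“common”/“util” items from the figure): in the acyclic form its CCD is 18, and adding one upward dependency pulls most items into each other’s closures and lifts the CCD to 27 – the same effect, in miniature, as the jump to over 1,200 above:

```python
def closure(item, deps):
    """Everything `item` depends on, directly or indirectly, including itself."""
    seen, stack = {item}, [item]
    while stack:
        for dep in deps[stack.pop()]:
            if dep not in seen:
                seen.add(dep)
                stack.append(dep)
    return seen

def ccd(deps):
    """Cumulative Component Dependency: the sum of every item's closure size."""
    return sum(len(closure(item, deps)) for item in deps)

# Three layers, every dependency pointing downward (acyclic).
deps = {
    "ui1": {"svc1", "svc2"}, "ui2": {"svc1", "svc2"},
    "svc1": {"db1", "db2"},  "svc2": {"db1", "db2"},
    "db1": set(), "db2": set(),
}
print(ccd(deps))  # 18

# One upward dependency creates a cycle through ui1, the services, and db1.
deps["db1"] = {"ui1"}
print(ccd(deps))  # 27
```

One edge, a 50% jump in CCD – and the effect compounds rapidly as the system grows.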

Why does the violation count change when I collapse a cell on a Structure101 Architecture Diagram?

This will happen when there is one or more violations contained within the scope of the cell you are collapsing. For example you might have this situation:


And the violations count is shown as 3(4) in the Diagrams list:


(i.e. there are 3 class-to-class violations, and a total of 4 when you count the detailed method-to-method dependencies etc.)

Now when you collapse the “orm” cell thus:


The violation count changes to 2(3).

This is because Architecture diagrams exist for you to articulate the dependencies in the codebase that you care about. The way you express what you care about is to expose it in a diagram. When you collapse the cell, the set of rules you are defining changes.

Why did we do it this way? Well, it is very important that you have some way to express what you don’t care about as well as what you do. Otherwise, should the rule checking also look inside all those collapsed cells? And how deep should it check – to the immediate child containers, or all the way down to the leaf containers?

Another consideration is that Architecture diagrams are intended to be shared across your team. It would just be confusing if violations were reported for rules that I can’t see in the diagram in my IDE, or included in the count of violations reported by Sonar.

This can seem odd if you have done a bunch of layering work within a parent cell, and just collapse it temporarily while you work on the internals of another cell. But don’t panic. The work is not lost – it is retained in the Structure101 Studio diagram, and that cell collapse will not take effect unless and until you “publish” the diagram to the repository from which it is shared with the team. Also, the expansion icon on the cell changes from a “*” to a “+” to remind you that you have previously expanded the cell and it may contain stuff you want to expose before you do share the diagram.

We could have done this differently, but I think we have struck the right balance between avoiding downstream confusion and letting you temporarily elide detail as you’re working. What do you think?

Structure101 combines multi-award-winning products for organising a codebase

Retrofitting software architecture

Structure101 Studio, combining the award-winning Structure101 and Restructure101 products, is now generally available.

Structure101 Studio makes it feasible to discover, define, communicate, and enforce an architecture for an existing codebase, without major upfront surgery. Structure101 achieves this by letting the software architect construct an external model of the architecture while simultaneously mapping the model right into the code.

Structure101 Studio then inserts the newly-defined architecture into the development workflow, supporting the deliberate evolution of the model with each iteration, always keeping code and model synchronised. The separation of model from the code means that teams gain the benefit of architecture-driven development immediately, while feeding model-code alignment tasks into the workflow at a pace that fits with project schedules.

Disorganized codebases are a huge problem in the software industry. Without meaningful higher-level abstractions to guide them, developers drown in an ever-expanding sea of source files, and this exacts a heavy tax on every development activity. Tools can visualize existing physical structures, but these structures are generally accidental or arbitrary, so just understanding them is of limited benefit. Structure101 Studio lets you draw on the existing structures to develop new, meaningful architectures. Simply put, developers get more done when the codebase is organized.

Structure101 Studio is a major step forward in our vision of giving teams the tools they need to retrofit a modular architecture to any existing codebase with a fraction of the effort and risk of starting over.

Chris Chedgey, Founder Structure101


We have been using Structure101 and Restructure101 at Adesso since 2011, upgrading to Structure101 Studio was an easy decision. It unifies both tools and makes it much easier to not just analyze the architecture of a project but improve it on the spot. It has become a standard tool and we are educating every architect in its usage.

Eberhard Wolff, Head of Technology Advisory Board, Adesso AG


Pricing and Availability

Structure101 Studio is available for purchase online at US$995 per user. Personal licenses can be purchased for US$395, and licenses are available for free for use on open source projects and for academic purposes.

To download a fully functional, 30-day free trial, visit the Structure101 website.

Learn more

To learn more about Structure101 Studio:

  • Visit the product section of the Structure101 website to understand the process of organising a codebase and how it fits with your existing workflow.
  • Visit the resources section for concise training videos and full product documentation.

About Structure101

Structure101 provides an agile architecture development environment (ADE) that lets the software development team organize a codebase into a modular hierarchy with low and controlled coupling.

According to the US Department of Defense, well structured software is delivered in half the time, at half the cost, and with 8x fewer bugs.

Structure101 supports C/C++, Java, .Net, ActionScript, InterSystems Cache Objects, Pascal, PHP, Python, SQL and UML.

Structure101 won the 2008 & 2011 Dr Dobb’s Jolt Productivity Award in the Architecture & Design category, and the 2012 JAX most innovative Java technology award.

Every day, thousands of customers use Structure101 to manage the architecture of more than a billion lines of code, and they have reported that Structure101 shaved months of calendar time and man-years of effort off a single project.

Customers include Apache Software Foundation, BMW, Cantor Fitzgerald, Cisco, Credit Suisse, Ebay, Euro Bank, European Commission, Financial Times, GE, JBoss, Junit, Life Technologies, Netflix, RBS, Sony Mobile, Thoughtworks, United Healthcare, VMware, Workday, Wells Fargo, Zurich Financial.

Structure101 is a small, privately-held, distributed, bootstrapped and profitable company with staff located across 5 countries and 3 continents.

# # #

Media Contact

Paul Hickey
+33 9 74 76 07 41