Demystifying the 3DEXPERIENCE Customization Model

In the comments of my previous article "Demystifying 3DEXPERIENCE platform," I was asked to demystify the opaque 3DEXPERIENCE security model. Well, it is a heady request, but I will attempt to do so here. As usual in my articles, I will provide the historical context, describe the as-is, and project into the to-be. Let's get started!

“The past is never dead. It's not even past.” William Faulkner

That is one of my favorite quotes (I used it to open my novel The Gramble Chronicles I: Sophie's Playlist), and it is a good one to understand the complex world of customization or configuration. If you haven't done so, you may wish to skim my article "Demystifying Digital Dilemmas" where I outlined the history of PLM platforms including the origins of the 3DEXPERIENCE platform. As I wrote there, the platform is the result of a merge of VPM V6 and MatrixOne. These platforms were already quite different:

  • VPM V6 had a C++ core, so customization was oriented around modifying attributes or masks (security filters) and compiling them back into the core for use in a rich-client CATIA session. It was powerful, but very cumbersome. There was a lighter mechanism for modifying business processes ("Business Logics" on the KnowledgeWare workbench, if memory serves) inside CATIA. Little could be modified in the user interface apart from the menu items. There was also a much larger library called Component Application Architecture (CAA) for writing custom applications (for, say, FEA analysis) that could be accessed in CATIA and executed in session. In essence, one could extend the data model in a few limited ways and hide functionality, but it almost always required a recompile and, in many cases, a restart of the application server. There was a small Java web server with a heavy C++ kernel that was loaded into the Java process at startup. The C++ data model was relatively narrow (only a handful of base classes) but fairly deep, with one layer of derivation after another. The base classes were called Instance Reference Port Connection (IRPC), which is still the name of the data model you may sometimes encounter in the Dassault Systèmes documentation. There was no web-based user interface.
  • For modifying business rules, security, and attributes, MatrixOne had an intermediate language between the Matrix kernel and the database's Structured Query Language (SQL) called Matrix Query Language (or MQL). MQL is a very powerful language, but it has a quirky syntax and, more importantly, no source code control and no rollback: once a change was made via MQL, it was permanent. That being said, the results of an MQL update are immediate and system-wide. For modifying the user interface, one could write Java Server Pages (JSPs) for web interface components and/or modify .properties files for, say, localization of the server into different languages. For more powerful macro-like programming, there were also Java Program Objects (JPOs), essentially Java programs stored in the database and compiled on demand for execution in the kernel without requiring a standalone Java Virtual Machine. Therefore, customization was done either as a superuser with the powerful MQL command line (also fronted by the now-deprecated System and Business tools with a Galaxy-based user interface), or by web coders writing JSPs with hooks into the JPOs stored in the database, deploying them into the web server, and restarting the Tomcat instance to compile the pages. There was a Java front end for the JSP and JPO handling and a C/C++ back end for creating and servicing the MQL requests. The data model was fairly broad (many, many base classes for the >1000 Matrix objects and attributes) and relatively shallow (very little subclassing out of the box) - basically everything on the platform was described as either a Type (an object), a Relationship between Types, or an Attribute. (With the word "type" roughly translated into French as "entité", you will sometimes find this data model referred to in the Dassault Systèmes documentation as "E/R", or Entité/Relation.)
  • In the original ENOVIA V6 releases, the VPM V6 infrastructure was superposed over the MatrixOne foundation. Due to the antagonistic relationship between the product-structure-oriented (narrow-deep) data model of VPM V6 (designed, you will recall, for manipulating CATIA data in session) and the more business-process-oriented (broad-shallow) MatrixOne data model, you basically got a two-for-one. In other words, the methods of customization that I described for VPM V6 and for MatrixOne were left more or less unchanged. Either one customized for the CATIA (and, to be perfectly transparent, DELMIA) rich client using C++ and other compilable tools, or one customized the web client via JPOs and JSPs. MQL became a bit more complex because it was required for many system-wide operations (File Collaboration Server (FCS) management, for instance) and for traditional MatrixOne data model operations, and yet was quite difficult to manage for VPM V6 data objects. For example, the data objects in MatrixOne were simply described as "Part" or "Document", whereas those of VPM V6 were described as "base class/derived class/derived class/etc", so it was nearly impossible for mere non-DS-R&D mortals to manipulate VPM V6 objects using MQL. This presented lots of challenges and may explain some of the headaches you had in the past.
  • Up to the end of the V6 era, integrations with external systems were written by hand and primarily rode upon a Simple Object Access Protocol (SOAP)-based XML file exchange framework. This was fairly flexible, but required extensive programming and used relatively old protocols. Later V6 releases introduced the XPDM event bus, but this also required quite a lot of programming and was rather limited in the scope of operations that could be performed. Some companies such as TechniaTranscat, T-Systems, CIDEON, CENIT, Geometric (now HCL), and ProSTEP wrote separately licensed adapters and frameworks for accessing non-DS (thus the "X" in "XPDM", meaning non-DS) PDM and CAD platforms.

PLM Express / TEAM to the Rescue! As well as RACE, OneClick Deployment Experience, and Baseline...

Dassault Systèmes quickly recognized this issue and rushed to create an SMB-friendly packaging model to address it called PLM Express. Bear with me for a minute, but we also need to remember that Dassault had acquired SmarTeam for managing CATIA V5 and SolidWorks data and then outsourced it to artizone in Israel in 2009. The name SmarTeam had a fairly good reputation and still showed up occasionally in ENOVIA products. One of the first places it showed up was in the simplified data model (TEAM) that was introduced for PLM Express. The concepts (which I will attempt to demystify below) are essentially the same in the various incarnations since 2010x - RACE (~V6R2012x), OneClick (~3DEXPERIENCE 2014x) and now Baseline (3DEXPERIENCE 17x and forward) - just with more "configuration" and less "customization", better web-based tools and, importantly, cloud-capable tools for customization/configuration.

When thinking about customization of V6, one had to separate the concepts around access to data (People, Roles, and Organizations (P&O) and Security), those of data model modification, those of modifying business rules, and those of modifying the user interface. Some of these stayed the same as I described previously - a necessity for customers that fully customized previous releases and had no path forward to switch data models without losing information - and some have evolved in this transformation that started with PLM Express / TEAM. Next in this article, I need to explain the basics of the access model and then what Baseline means in this context.

Access Model: We the People

There are four essential concepts when it comes to modeling the physical human hierarchies outside the platform in 3DEXPERIENCE:

  • People - these are users that have a unique ID, password (and as of 3DEXPERIENCE R2014x a 3DPassport), and role, belong to an organization, and have product licenses assigned to them. Objects created by users on the system belong to a primary user (however, they can have "secondary" owners as well).
  • Role - in the context of the access model, this is one of a specified list of platform roles that the user can have in session. Since RACE (and thus in OneClick and Baseline), these roles are Reader, Contributor, Author, or Leader. Each of these carries fairly obvious rights on data, and every user MUST have at least one role. Now, be careful! The roles referred to in the marketing for the Industry-specific offers ("Mechanical Designer" or "Product Architect") are purely packaging-oriented and have NOTHING to do with the access roles I just mentioned. And, for those old MatrixOne hangers-on ("Hi guys!"), these are also different from and incompatible with the roles inside the old Matrix centrals used for MQL access rules - those are for the most part deprecated and obsolete. The access rights on data are cumulative as one goes from Reader to Contributor to Author to Leader.
  • Organization - this is a representation of a group of people into a department, project team or company. There can be multiple organizations in the system (OEM, Supplier, etc) and every user MUST belong to an organization. Organizations can be hierarchically organized to reflect real business structures.
  • Collaborative Space - when users of the 3DEXPERIENCE platform work together on a common set of geometric files or on a project, they are given a Collaborative Space with which objects modified in session are tagged. This was previously called "Project" in the VPM V6 data model - the name was changed to Collaborative Space in 3DEXPERIENCE R2014x. This is also why the 3DEXPERIENCE User Interface component representing the old ENOVIA V6 architecture was named "3DSpace". Use caution, however, because the name "Space" here is a bit deceiving. It does NOT refer to physical storage but should rather be regarded as a unique tag on the objects themselves (they can only belong to one primary Collaborative Space). (Nota bene: physical storage is a concept completely detached from the data model at this point and managed by the File Collaboration Server (FCS). For objects that have data storage, one of their attributes will be a "Store location" - THIS is the physical storage for the object.)
  • Security Context - once we know who is working in what role and for which organization, we want to isolate his/her work on a specific task or project, or more literally a Collaborative Space. This working context is called a Security Context, consisting of a unique combination of Person+Role+Organization, which is used with fancy vector math to calculate access to objects in the database. Each object is tagged with an Ownership Vector (Person, Organization, Collaborative Space). When a user requests access to an object, s/he must have access to the Collaborative Space the object is "stored" in, and his/her role must allow access to the object in its current lifecycle state. A mouthful to describe that crazy vector math I mentioned earlier.
  • Note that there is actually an ISO standard for access management (ISO 17799) which inspired much of the work in this area.
  • Two other related concepts are those of Rules and Policies. These were implemented differently on VPM V6 and MatrixOne in order to grant or deny access to specific objects by specific users when the default rules did not apply. Merging the two models was another critical advance of the RACE/OneClick/Baseline initiatives: rather than the old method of heavy client tools (VPM V6) or cumbersome and potentially performance-killing MQL expressions (MatrixOne), most of the common operations can now be done through a simple web-based user interface in an indexable (read: faster to resolve at runtime) manner. These Indexable Security Keywords are what one now sees in R2017x in the 3DSpace Control Center widget, along with other goodies like Lifecycle configuration and Naming conventions, among others.
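To make the Person+Role+Organization versus Ownership Vector comparison above a bit more concrete, here is a minimal Python sketch. The class names, role ranking, and the access rule itself are my own illustrative assumptions - this is NOT Dassault Systèmes' actual algorithm, just a toy model of the idea that a user must work in the object's Collaborative Space and that rights are cumulative across the four Baseline roles.

```python
from dataclasses import dataclass

# Cumulative rights: each role includes everything below it.
ROLE_RANK = {"Reader": 0, "Contributor": 1, "Author": 2, "Leader": 3}

@dataclass(frozen=True)
class SecurityContext:
    """Toy model of a working context: Person+Role+Organization,
    plus the Collaborative Space the user is currently working in."""
    person: str
    role: str
    organization: str
    collaborative_space: str

@dataclass(frozen=True)
class OwnershipVector:
    """Toy model of the tag carried by each object."""
    person: str
    organization: str
    collaborative_space: str

def can_modify(ctx: SecurityContext, owner: OwnershipVector,
               minimum_role: str = "Author") -> bool:
    """Illustrative access check: the user must have access to the object's
    Collaborative Space and hold at least the required role."""
    if ctx.collaborative_space != owner.collaborative_space:
        return False
    return ROLE_RANK[ctx.role] >= ROLE_RANK[minimum_role]

ctx = SecurityContext("jdoe", "Author", "OEM", "Chassis Design")
obj = OwnershipVector("asmith", "OEM", "Chassis Design")
print(can_modify(ctx, obj))  # True: same space, Author >= Author
```

A Reader in the same space, or an Author in a different space, would be denied - which is the gist of the "vector math" described above.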

Data Model: Unified Typing for All

The next piece of the puzzle is the data model itself. I have already mentioned that there were two competing models (IRPC for VPM V6 and E/R for MatrixOne) that caused some headaches with which some of my readers are already familiar. In 3DEXPERIENCE R2015x, Dassault Systèmes released a simplification called Unified Typing. The idea was to avoid data loss and enable relationships between these objects as well as the right of each to modify the other. (Nota bene: this is specific to Types; there is not as yet a "Unified Relationship".) There is a manual process for converting VPM V6 data in releases prior to R2015x to the new model that is fairly straightforward. The tools that are compatible with "OneClick" and "Baseline" can now transparently add attributes to and extend the data model of any object in the system with the same tools and behaviors. A BIG improvement indeed.

(On Your Best) Behavior

The area of behavior on the platform has remained somewhat unchanged through R2017x (and I would predict very few changes to this in R2018x and beyond). Most behavior is written, as I mentioned above, using JPOs stored in the database which are then called from other JPOs, from JSPs, or from custom code. Some simpler behaviors are modifiable via the 3DSpace Control Center.

Since the origins of V6, the concepts of Workflow and Lifecycle have been nearly synonymous (whereas they were relatively distinct on other PLM platforms). Individual objects move through various lifecycle (or in CATIAese, Maturity) states with signatures, changing access rules, and the like. To connect objects together in an extended process, one can use Routes in a point-to-point manner. There was an obsolete Workflow tool inside of MatrixOne, but it was definitively removed around V6R2012x. Routes and Lifecycle remain essentially unchanged other than new web-based interfaces to manage them. As for workflow, several rumors have circulated - the integration of an open-source tool into 3DEXPERIENCE or recycling 3DOrchestrate (formerly known as FIPER) from the SIMULIA world - but neither of these has been confirmed and I would not expect any changes here in 3DEXPERIENCE R2018x either.
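The idea of objects moving through maturity states with legal transitions can be sketched very simply. The state names and transitions below are invented for illustration - the actual 3DEXPERIENCE policies define their own states, promotion/demotion rules, and signature requirements.

```python
# Illustrative lifecycle ("maturity") graph: each state lists the states
# it may legally transition to. State names are hypothetical.
LIFECYCLE = {
    "In Work":  ["Frozen"],
    "Frozen":   ["In Work", "Released"],  # can be demoted or promoted
    "Released": ["Obsolete"],
    "Obsolete": [],                       # terminal state
}

def promote(current: str, target: str) -> str:
    """Move an object to a new maturity state if the transition is legal."""
    if target not in LIFECYCLE.get(current, []):
        raise ValueError(f"Illegal transition {current} -> {target}")
    return target

state = "In Work"
state = promote(state, "Frozen")
state = promote(state, "Released")
print(state)  # Released
```

In the real platform, each such transition can additionally require signatures and will change the access rules on the object, as described above.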

User Interface: On the Road to Web 3.0

Another major transformation in 3DEXPERIENCE was the widgetized user interface. If you recall, up to 3DEXPERIENCE R2014x, each application had a unique interface: rich clients for CATIA, DELMIA, and SIMULIA, and JSP-based web clients. The sea change in the 3DCompass User Interface (see my Demystifying 3DEXPERIENCE article) was the unification of all the user interfaces (UI) to a common paradigm using common UI components (the Me, Share, and other buttons on the menu bar at the upper right of the screen), a common blue-grey color scheme, and a common layout (left bar for the 3DCompass and apps, middle canvas for data display, and the aforementioned menu bar across the top) regardless of whether the apps are rich graphical clients or web-based browser clients. Customizing the CATIA and DELMIA interfaces can still be done to some degree using the CAA2 development environment (separately licensed from the platform). For the web-based clients, however, the paradigm has completely changed to a Web 3.0 architecture leveraging HTML5/CSS3 features and functions. The primary advantages of HTML5 over JSPs are performance (no compilation required, and widgets can be independently updated with no dependency on each other) and flexibility (widgets can leverage CSS3 style sheets for exceptionally attractive interfaces). While 3DDashboard is entirely based on widgets, the ENOVIA apps still contain some JSP pages, but they are embedded in widgets and being gradually phased out. Widgets and Apps are all accessed via the 3DCompass on the upper left side of the screen.

Another key change is the implementation of Representational State Transfer (REST) web services. All the widgets we just discussed leverage this framework to access data on the platform. Stated as simply as possible: whereas SOAP interfaces mostly require coding for each invocation (tight binding to the platform) and allow for little security (i.e., no 3DPassport integration), REST interfaces are simply based on accessing a specific Uniform Resource Locator (URL) and performing a "GET" or "POST" operation (there are a few others, such as PUT and DELETE) to exchange data, and they can be extended to be authenticated via 3DPassport. REST interfaces are therefore more flexible, because they are loosely coupled to the 3DEXPERIENCE platform, and more secure than SOAP interfaces. SOAP is thus being slowly deprecated and replaced with REST. Please note that the XPDM infrastructure still exists as the strategic solution for integration with other PDM systems and is being simplified and improved.
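To see why REST is so loosely coupled, here is a small Python sketch that assembles such a request. The host name, resource path, query parameter, and token are all hypothetical - consult the official 3DSpace REST documentation for the real endpoints and the 3DPassport authentication handshake. The point is that, unlike SOAP, no generated stubs or per-invocation coding are required: it is just a URL, a verb, and some headers.

```python
from urllib.parse import urlencode

# Hypothetical endpoint for illustration only - not a real DS URL.
BASE_URL = "https://3dspace.example.com/resources/v1/modeler/documents"

def build_get_request(doc_id: str, ticket: str) -> tuple[str, dict]:
    """Assemble the URL and headers for a stateless REST GET.
    The 'ticket' stands in for a token obtained from 3DPassport."""
    query = urlencode({"fields": "title,revision"})  # invented parameter
    url = f"{BASE_URL}/{doc_id}?{query}"
    headers = {
        "Accept": "application/json",          # ask for a JSON payload
        "Authorization": f"Bearer {ticket}",   # hypothetical auth scheme
    }
    return url, headers

url, headers = build_get_request("DOC-000123", "example-ticket")
print(url)
```

Any HTTP client (a browser, curl, or a widget's JavaScript) could then issue the GET - that loose coupling is exactly the advantage over SOAP described above.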

Fino's Crystal Ball

So, what does the future look like for customization? Here is my take:

  • Access Control: expect the Baseline to continue to expand to cover further attributes and objects, and for these capabilities to be simultaneously available on cloud and on premises. (Incidentally, do my readers want a Demystifying DS Cloud article?) The old compiled frameworks will continue to exist to support heavily customized clients, but EXTREME caution should be used if taking this Customer-Specific Environment (CSE) path, as it will absolutely become a dead end at some point in the future.
  • Data Model: Expect it to be easier to migrate to Unified Typing in the future and more of the data model and typing operations to be accessible through the web user interface. MQL will still exist, but should be avoided where possible.
  • Behaviors: My crystal ball has a blind spot when it comes to Workflow, so don't hold your breath for a solution there in the immediate future. Rather, take advantage of Routes and the 3DSpace Control Center for controlling object behavior.
  • User Interface: I would suggest taking some web-based classes on HTML5 and CSS3 or reading some books (I would highly recommend Beginning HTML5 and CSS3 from Apress) as HTML5 is here to stay in 3DEXPERIENCE. For REST web services, I highly recommend REST in Practice: Hypermedia and Systems Architecture by Jim Webber, Savas Parastatidis, and Ian Robinson as best in class. With these concepts in hand, you will become a 3DEXPERIENCE platform widget wizard in no time.

Conclusion

Well, that was a bit of a whirlwind tour through my view of how to extend and modify the 3DEXPERIENCE platform to fit your needs and business processes. I may have missed a thing or two (feel free to remind me in the comments), but I hope that this overview will be helpful for you.

As an added bonus, I found this great presentation by Ron Stenger of Razorleaf that was delivered at COE 2017 that does a more example-based overview of many of the topics I covered.

 

Lastly, the obligatory pitch: I worked with (2008-2010) and for Dassault Systèmes (2010-2017) and wrote most of the existing training material touching on all these topics. I created Finocchiaro Consulting, LLC to help customers navigate these waters, so do not hesitate to reach out and contact me.

Article on LinkedIn: https://www.linkedin.com/pulse/demystifying-3dexperience-customization-model-michael-finocchiaro/
