The inaugural SOAFEE All-Hands kick-off session took place on 27th January 2022, attended by around 300 people from across the automotive ecosystem. If you were unable to attend the session live, or would like to review the content again, the recording of the session is available here.
Questions and Answers
A number of questions were raised during the session. Some were answered live, but others we didn’t have time to get to. All of the questions raised are answered by members of the SOAFEE Governing Body below.
Through SOAFEE, we are not aiming to safety-certify the whole of hub.docker.com, or to guarantee that any containerized application will be certifiable. We aim to ensure that there are no fundamental blockers to an OCI-compliant runtime being able to achieve safety certification. The ecosystem will then be able to deliver safety-certified base layers for your certifiable code. The Dockerfile for a container workload that you wish to certify could end up looking something like this:
# Use a safety certified base layer for my application
FROM safe_os AS builder
# Install build dependencies
# Build workload
RUN make && make install
# Create a runtime environment
FROM safe_os AS runtime
# Copy built artefact into runtime
COPY --from=builder <...>
# Set the entrypoint and default arguments for the container
ENTRYPOINT ["<workload>"]
CMD ["<workload arg 1>", "<workload arg 2>"]
The aim for SOAFEE is that the output of this build process should be certifiable through a standard certification process.
SOAFEE is aiming for binary workload portability in order to maximize the investment made by workload developers in their applications. In order to achieve this, we need a HAL (Hardware Abstraction Layer) of some form to ease portability from one hardware platform to another. A lot of great work has already been undertaken in other standards bodies like COVESA, where they have spent years working to understand what the HAL for a hypervisor-based implementation should be. SOAFEE aims to adopt and accelerate this work through broad industry collaboration.
There are already moves in the industry to standardize on VirtIO from OASIS. One of the first tasks of the SOAFEE technical working groups will be to undertake a gap analysis of VirtIO, both in terms of functionality (for example, VirtIO currently has no definition for AI/ML-based applications) and in terms of its suitability for safety-critical and real-time environments. A fundamental axiom for SOAFEE is that it will not introduce fragmentation between cloud and edge in the standards it adopts. The goal in this instance, therefore, would be to work upstream with OASIS to extend the VirtIO interfaces to encapsulate the additional requirements of the SOAFEE members, whilst also enabling VirtIO implementations to achieve the safety and realtime needs of the market.
The reference implementation of the SOAFEE architecture is called EWAOL (Edge Workload Abstraction and Orchestration Layer), and is available from the meta-ewaol git repository. This reference implementation is built using Yocto, with additional unofficial layers to support more target devices available from meta-ewaol-machine, although these are community-run with no official backing from the SOAFEE members.
The vision for SOAFEE is that it is complementary to upstream rich stacks like AUTOSAR, AGL, Autoware, Android Automotive and others. SOAFEE is not looking to replace any of these stacks, but to make it easier for these complex workloads to be deployed to current and next-generation hardware platforms.
The initial scope for SOAFEE is to target rich application processors, but we would like to explore the concept of MCUs and MCU-based workloads under the scope of control of a standards-based orchestrator. The orchestrator has the opportunity to be the single point of truth for configuration and deployment of workloads across the fabric of the system, and it could give us a sensible control point for managing the complex topology of a functional platform.
There is a reference implementation of the SOAFEE concepts called meta-ewaol. This project was initially launched by Arm, which is actively working on a collaboration agreement to enable the SOAFEE ecosystem to contribute to the implementation. However, this reference implementation is designed only to be functional, not necessarily a certified go-to-market solution. For that, we will be working with our commercial software partners to create a safety-certified and commercially supported equivalent of the EWAOL reference implementation. This is made possible because the base SOAFEE architecture is expressed as a set of upstream open standards that any commercial software entity is free to implement against.
No, there is no vendor lock-in to a specific cloud service provider. This is enabled through the exclusive use of open standards within the SOAFEE architecture, which is how we ensure cloud independence. AWS is a founding member of the SOAFEE SIG, and as a result will be our initial implementation partner for true cloud-based technologies. However, the SOAFEE SIG is open to all, and we welcome any other cloud service provider to join and help shape the future of cloud-native technologies in safety and real-time environments.
Absolutely. AWS and Arm hosted a workshop on this process at re:Invent, details of which can be found here.
From early discussions with Tier 1s and OEMs in the ecosystem, it became clear that not all vendors agree that a hypervisor is needed for their specific use-cases. However, a hypervisor gives us an interesting control point for the required elements of FFI (Freedom From Interference), access control, and resource management, which, when brought together, help the low-level architecture guarantee the runtime needs of the workloads as expressed by the orchestrator. This does not require that a commercial implementation of the SOAFEE architecture make use of a hypervisor, so long as it can achieve the runtime goals of the workloads by some other means.
The EWAOL reference implementation is currently integrating Xen as a reference Type-1 hypervisor. We will be making use of this platform to implement and validate architectural choices made by the technical working groups. We can also use it as a benchmark for understanding the pros and cons of using a hypervisor in a deployed system.
While we were not aware of this initiative, SOAFEE does not want to re-invent the wheel at any stage of its implementation. If an external ecosystem alliance is working on or has solved problems that are in the remit of the solution required by the SOAFEE architecture, we will consider adopting that solution in order to accelerate development against the goals of the SOAFEE SIG.
Security is hugely important to any domain today, and SOAFEE is no exception. We need to leverage standards and best practices in order to achieve our overall security goals. At this point in time, we are looking at adopting platform security through PSA and, from a workload perspective, exploring the use of upstream standards like PARSEC. If these standards leave gaps in meeting the security requirements of the SOAFEE members, we will work upstream within the identified technologies to close them.
SOAFEE encourages adoption of cloud-native tooling to achieve the quality required for effective workload deployment. As such, it does not mandate any specific tool for CI/CD. We leverage the OCI and CNCF standards for workload packaging and deployment, which means that the ecosystem can choose the correct technology for their specific use-case.
As for the orchestration technology, the reference EWAOL implementation makes use of k3s. However, as the orchestration is based around open standards, you can replace it with something functionally equivalent if needed. The expectation is that, for a go-to-market platform, an OEM would want to use a commercially supported orchestrator supplied by an ecosystem partner.
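As a concrete sketch of what this looks like in practice, a workload deployed through k3s is described by a standard Kubernetes manifest, so the same definition works against any conformant orchestrator. The image name and resource figures below are illustrative placeholders, not values defined by SOAFEE or EWAOL:

```yaml
# Illustrative Kubernetes Deployment for a containerized edge workload.
# The image name and resource limits are placeholders, not SOAFEE-defined values.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: example-workload
spec:
  replicas: 1
  selector:
    matchLabels:
      app: example-workload
  template:
    metadata:
      labels:
        app: example-workload
    spec:
      containers:
      - name: workload
        image: registry.example.com/example-workload:1.0
        resources:
          limits:
            cpu: "1"
            memory: 256Mi
```

Because k3s consumes unmodified Kubernetes manifests, a definition like this could be applied with `kubectl apply -f` and later moved to a functionally equivalent orchestrator without change.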
This will depend on how quickly the SOAFEE SIG solves the underlying issues with adopting the cloud-native technologies for the markets they are implementing for. The expectation is that we should be able to hit a start-of-production within the 2025 time frame.
Members of the SOAFEE SIG are aware of issues like this with containerization as it exists today, but there are discussions ongoing within the OCI ecosystem on how this can be resolved effectively. We don’t have the answers today, but through the SOAFEE working groups, we will work collaboratively on the non-differentiating problems with the software stack in order to resolve issues such as these.
We see SOAFEE as being a low level system enabler for rich stacks like Arene, VW.OS and others to operate on.
TBD. This is why the working groups are being formed – to enumerate and find solutions to these sorts of problems. Our initial group cannot know all the possible questions nor all the answers so please join in and help uncover the answers.
The initial intent is to explore the use of industry-standard VirtIO to enable cloud and edge parity for devices like GPUs and so on. There has already been a lot of research and investigation into these technologies through organizations such as AGL, with this webinar from Panasonic on Device Virtualization Architecture in Automotive. The goal of SOAFEE is to build on this work and accelerate the standardization around technologies like VirtIO to help bring binary workload parity between cloud and edge devices.
As the SOAFEE SIG moves towards its execution phase, we are working to bring on more partners to collaborate within the Technical Steering Committees and workgroups. With this Public All Hands meeting and the TSC kick-off meeting in February we are sharing our plans for SOAFEE publicly. All governing body members of the SOAFEE SIG have a responsibility to work towards onboarding new members, and we are actively working to that goal. More news on new members will be announced over time.
We are already doing some investigations in this domain. Through the Linaro Stratos project, we are looking at implementing a generic VirtIO backend using Rust to explore the idea of exposing bare-metal drivers to a rich OS. But on the whole, SOAFEE does not mandate the use of any particular language to implement your workloads because we hide the implementation details and runtime dependencies in a container image.
The reference implementation will be making use of Zephyr in the short term to build out realtime and safety functions. There is an open question on the use of containers in an RTOS, but some of our ecosystem partners have products that are working towards creating functional realtime container solutions. Over time, we will be working to integrate these solutions into the SOAFEE reference implementation.
The current plan is to have the TSC All Hands as a two half-day virtual event. More details will be posted to the SOAFEE website soon.
The primary SOAFEE goal is to enable ease of software portability, the ultimate expression of which is 100% binary portability between SoCs. The SOAFEE architecture will aim for that portability goal whilst attempting to minimize the performance impact of portability. However, there are tooling options in the cloud that can help with the optimization of workloads when the target hardware is known. For example, AWS SageMaker Neo performance-tunes ML workloads in the cloud, enabling us to extract the best performance whilst maintaining portability. The expectation is that the tooling ecosystem around SOAFEE can help to solve some of these complex tuning problems whilst maintaining portability.
The structure of the TSC and working groups is currently being agreed upon by the Governing Body. This will be presented at the TSC kick-off meeting that will take place late February 2022. More details of this event will be posted when available.
The SOAFEE governing board currently contains members that deal with these sorts of issues directly, and we encourage more to help out. So, yes, they are part of the fabric of being successful in the automotive industry and will be considered as part of the working group discussions.
The SOAFEE charter, which will be published soon, will cover this question in more detail, but to answer quickly here: there is currently no membership fee for joining any part of the SOAFEE SIG. The TSC and working groups are run as a meritocracy, initially seeded by members of the governing body. If you would like to have voting rights at the TSC or Working Group level, we ask you to agree to the terms of engagement. We are currently investigating a lightweight ‘click-through’ process to enable this.
By adopting cloud native software design methodologies, we bring into scope the DevOps workflows and best practices. This brings with it a rich tooling ecosystem that helps with CI/CD of workloads in the modern infrastructure domain today. Through SOAFEE, we are aiming to expand the language used by modern orchestrators to express realtime, safety, functional and non-functional aspects of workloads so that they can be deployed correctly on the edge device, but also be tested properly as a part of the DevOps cycle.
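As an illustration of what such an expanded vocabulary might look like, the sketch below attaches hypothetical scheduling and safety annotations to a standard Kubernetes Pod. The `soafee.example/...` annotation keys are invented for this example and are not part of any published SOAFEE specification:

```yaml
# Hypothetical Pod manifest: the annotations below are invented placeholders
# showing how realtime and safety requirements might be expressed to an
# orchestrator; no such keys have been standardized by SOAFEE.
apiVersion: v1
kind: Pod
metadata:
  name: brake-monitor
  annotations:
    soafee.example/deadline-us: "500"   # worst-case completion deadline
    soafee.example/asil-level: "B"      # required safety integrity level
    soafee.example/isolation: "freedom-from-interference"
spec:
  containers:
  - name: monitor
    image: registry.example.com/brake-monitor:1.0
```

An orchestrator that understood requirements like these could refuse to schedule the workload onto a node that cannot meet them, and the same metadata could drive realtime and safety testing in the DevOps cycle.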
This is a known gap in the cloud-native world today, but through SOAFEE there is a desire to fix the gaps and make the process suitable for automotive workload deployments.
The vision for SOAFEE is that it is complementary to upstream rich stacks like AUTOSAR and others. Hence, the SOAFEE SIG would like to get in touch with initiatives working in adjacent areas. We welcome the exchange with AUTOSAR and will contact you directly.