We are often asked about the energy efficiency of the iov42 core platform. The question usually takes the form of comparing the platform either with a single central database or with other blockchain or Distributed Ledger Technology (DLT) platforms. In this post we address both comparisons.
Centralisation vs decentralisation
Comparing our platform to a centralised database is a bit like comparing a car to a lorry. Both can get you from A to B, but they perform different tasks in doing so. The lorry is slower and uses far more fuel, but it carries a load to the destination. The car gets there faster and uses less fuel, but it may not be able to carry everything required. They meet different requirements.
A centralised database is a repository of data held in one central place. Some databases support storing data in multiple locations for performance or availability reasons, but their main focus is the persistence and retrieval of data.
Our platform performs a very different role; the storage of data is only a small part of its operation. It performs computation to check the validity of each operation, and each operation must be agreed by the relevant parties before any data can be updated and stored. Reaching consensus and storing the results in multiple places requires more work, and hence more energy, than a single central database. However, it creates trust in the data that is stored and in the operations performed.
This added trust does not come for free, but we are working to reduce the cost of providing it.
Existing blockchain solutions vs iov42
Blockchains and DLTs have, quite rightly, gained a poor reputation for energy efficiency. The original “proof of work” approaches relied by design on huge amounts of meaningless ‘work’ to prevent tampering with the stored data. Since then, other algorithms have been developed that address this. For example, Ethereum has moved to a “proof of stake” approach, and numerous other more sustainable algorithms are now available.
The approach we developed is similar to the “proof of authority” concept. The parties that operate the nodes in a zone are known to each other, which means a level of trust is established amongst them, very likely a legally binding one. As a result, only a small number of nodes are needed to provide trust across a zone, which reduces the computation, storage and bandwidth required compared to platforms using proof of work algorithms.
The design of our platform enables us to address the energy requirements in a number of other ways as well.
Each use case we support is different, and each requires a different performance profile. An advantage of the iov42 core platform is that it can be deployed in ways that suit these differing load profiles. For use cases with small transaction volumes and light loads, a zone can be deployed with minimal hardware, reducing the environmental impact to only what is necessary.
For larger loads, the deployment can be scaled up. Scaling out the number of components in each zone allows much greater performance, though it comes with the overhead of increased energy usage.
A common way to measure this impact is the amount of energy used per transaction: the more transactions that can be performed on the same hardware, the lower the energy used per transaction. As performance improves, we also gain the opportunity to reduce the hardware needed to support a given use case or performance profile, which in turn further reduces energy consumption.
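The energy-per-transaction metric can be sketched in a few lines of Scala. The figures below are hypothetical round numbers chosen for illustration, not iov42 measurements:

```scala
// Toy illustration of energy used per transaction.
// All figures are hypothetical, not iov42 measurements.
object EnergyPerTransaction {
  // Given the average power draw of a zone's hardware (watts) and its
  // throughput (transactions per hour), return watt-hours per transaction.
  def energyPerTx(powerWatts: Double, txPerHour: Long): Double = {
    val wattHoursPerHour = powerWatts * 1.0 // one-hour window
    wattHoursPerHour / txPerHour
  }

  def main(args: Array[String]): Unit = {
    // Same hardware, ten times the throughput => one tenth the energy per tx.
    println(energyPerTx(500.0, 10000L))  // 0.05 Wh per transaction
    println(energyPerTx(500.0, 100000L)) // 0.005 Wh per transaction
  }
}
```

The example makes the point in the text concrete: holding the hardware (and so the power draw) fixed while increasing throughput divides the per-transaction energy cost accordingly.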
The ability to run multiple use cases on the same zone means a zone deployment can be configured to make optimal use of the resources available, and different usage patterns can be offset against one another to reduce the zone's idle time. This can reduce the number of zones required and hence the overall energy footprint.
We are always working to improve the performance of the code we write. The recent introduction of an ‘aggregator’ component to our consensus protocol is a good example. It reduced zone-wide network traffic and cut the number of cryptographic signing operations involved in consensus, an expensive computation. It also shrank the size of the cryptographic ‘proofs’ produced, which in turn lowered storage requirements.
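The effect of an aggregator on network traffic can be sketched with a simple message-count model. This is a generic illustration of the idea, not a description of iov42's actual protocol: with n nodes each sending its signed vote to every peer, roughly n·(n−1) messages flow per round; routing votes through an aggregator that broadcasts one combined result brings that down to roughly 2n.

```scala
// Toy message-count model: all-to-all vote exchange vs. an aggregator.
// An illustrative model only, not iov42's actual consensus protocol.
object AggregatorSketch {
  // Every node sends its signed vote to every other node: n * (n - 1) messages.
  def allToAllMessages(n: Int): Int = n * (n - 1)

  // Every node sends its vote to the aggregator (n messages), which then
  // broadcasts one combined result back to all n nodes (n messages).
  def aggregatedMessages(n: Int): Int = n + n

  def main(args: Array[String]): Unit = {
    val n = 10
    println(allToAllMessages(n))   // 90 messages per round
    println(aggregatedMessages(n)) // 20 messages per round
  }
}
```

Even at ten nodes the difference is large, and the gap widens quadratically as a zone grows, which is why collapsing per-node exchanges into one aggregated step saves both bandwidth and signing work.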
We lean heavily on a JVM (Java Virtual Machine) based language, Scala, and the benefits this brings. The JVM is widely supported across a large range of hardware targets. We plan to experiment with hardware such as ARM-based deployment targets, which tend to have significantly lower energy requirements than the targets we use today. The JVM also allows us to migrate relatively easily to newer, more energy efficient hardware as it becomes available.
The use of Kubernetes and ‘containers’ means we can deploy and run on any of the major cloud providers. All of them have either achieved carbon neutrality or are working towards it in the near future, and their technologies are constantly evolving, with a significant focus on powering data centres with renewable energy. Our node operators can lean on these advances to reduce their environmental impact.
The flexibility of the core platform with respect to deployment and scalability means it can be tuned to balance the performance and scalability requirements of particular use cases while giving customers a clear path to sustainability.