Nillion Network: The Gestalt of a Decentralized Secure Processing Network

November 24, 2022

Nillion is a fully decentralized, trustless network whose protocol is based on a novel cryptographic primitive called Nil Message Compute (NMC), poised to revolutionize how blockchains, consumers, businesses, and institutions handle data, security, privacy, and interoperability in Web2 and Web3. NMC is built on quantum-resistant Information-Theoretic Security (ITS) and combines Shamir’s Secret Sharing (SSS) with One-Time Masking (OTM). This mathematical innovation solves the performance and scalability constraints of traditional Secure Multi-Party Computation (SMPC) protocols, rendering them obsolete and positioning Nillion as the mass-adopted, commercially viable default layer for secure processing.

Tristan Litré, Nillion’s Director of Crypto, and Miguel de Vega, Nillion’s Chief Scientific Officer, joined us for an AMA on November 1st.

vVv: Please give us a brief self-introduction on your position and responsibilities at Nillion.

Tristan: I’m Tristan, Director of Crypto at Nillion. I have an engineering background but have been in crypto since 2012. I traded through 2017 and left my job to go full-time into crypto at the beginning of DeFi summer. My first on-chain experience was participating in numerous DAOs. A little over a year ago, I met our Chief Strategy Officer, Andrew Masanto, and he introduced me to Miguel, the team, and what Nillion was building; it sounded too good to be true, but it wasn’t, because here I am. My team and I work on the token economy, modeling and simulating the flow of value within the ecosystem. We are also designing governance mechanisms and what we call the policing policy. Many of these are adjacent to mechanisms people know from blockchains, such as staking and slashing, but we will also implement others to ensure that the network maintains the highest security standards.


Miguel: I’m Nillion’s Chief Scientific Officer, and I oversee platform and product creation, from the development and proper implementation of the math to its interfaces with our products. My technical background is in engineering, with seven years of engineering studies and work experience around 2000 at some of the largest IT companies, like Siemens and Nokia. My primary focus is on mathematical problems and their intersection with technology. I continued with a Ph.D. in mathematics and spent three years modeling the back end of the internet. I authored 27 patents; it was highly intensive. Next, I shifted to computer science and machine learning. In 2013, I met Rob Leslie and was exposed to cryptography and zero-knowledge proofs. I spent many years on this in connection with authentication and identity verification. However, at some point we required more than verification; we needed to perform computations, and that’s when we started implementing Secure Multi-Party Computation (SMPC). I first met Andrew Masanto in 2017 through a project involving zero-knowledge proofs. We can all agree that he’s been transformational in our lives. In 2021, we came up with Nil Message Compute (NMC), a method to circumvent the heavy communication that plagued SMPC. We presented this to Andrew and his team, leading to the birth of Nillion.

vVv: What makes Andrew Masanto such a good leader? Why did you join him and believe in Nillion’s vision?

Miguel: Essentially, Andrew Masanto is a unique individual. Not only has he built businesses around Facebook and community building, but he has also co-founded a few unicorns in the crypto space, including Hedera Hashgraph. He has tremendous expertise in community building and a keen sense of market timing. For instance, in 2017, when Rob Leslie and I reached out to him about a faster way to implement privacy, like a privacy coin along the lines of Zcash or Monero, he looked at it and said, “Alright, the technology makes sense, but it’s not the right moment. It’s not the right time; the market is not there”, and that changed when we pitched NMC. We fully trust his instincts, as they have proven correct more than once, and we’re delighted to have met him. In addition, he’s well connected and has brought in great names: Conrad Whelan, employee #2 and founding engineer at Uber; Slava Rubin, founder of Indiegogo; Mark McDermott, head of crypto at Nike; and many crypto experts, like Andrei Lapets, a board member of the MPC Alliance, and Stanislaw Jarecki, a renowned cryptographer from UC Irvine, who make the magic happen.

vVv: Let’s jump into what Nillion does and the potential implications for blockchain and humanity. What are the primary problems Nillion solves, and what are the subsequent use cases?

Tristan: The primary problems Nillion solves are around security. Fundamentally, there are holes in how Web3 and the general public on the internet approach security. The next wave of privacy-enhancing technologies will open up what people can do with data and how they can store it more securely, because the current go-to operating model for secure storage is not actually secure. A good metaphor: if you need to use secured data today, you first unlock it, then hide it underneath your hand while you look at it, like a hand of poker cards; after use, you lock it back in its box. Among these privacy-enhancing technologies, NMC allows you to store data in an Information-Theoretic Secure (ITS) manner, which is stronger than standard computational encryption based on the difficulty of a mathematical problem. In the metaphor, you can use that data while it’s still hidden, and you never have to take it out of the box. The problem that NMC solves is unlocking all of this sensitive data in any field with hidden secrets that people can’t use because of redlining, regulation, etc., and rendering it mathematically impermeable to exploitation. You can still do things with that data that are helpful to humanity and to your business, because it never loses its security. That’s the primary use case we’re looking to tackle.

The problem that NMC solves is unlocking all of this sensitive data in any field with hidden secrets that people can't use because of redlining, regulation, etc., and rendering it mathematically impermeable to exploitation.

vVv: Miguel, NMC is your brainchild. Was this outstanding solution always intended to be used with blockchain technology, or were other use cases the primary focus?

Miguel: No, it was initially developed to solve the problem of collaboration between large companies. Originally, we were planning on using it in the context of anti-money laundering with banks. For example, when one bank sends money to another, both are bound by regulations like privacy law or GDPR, which prevent them from exchanging information about the sender and the recipient. Although this is good in terms of privacy, it is terrible for catching the bad guys and illicit money flows, because it means each bank has only partial knowledge of what’s going on in the transaction. The advantage of NMC being ITS is that it can solve that problem: the banks do not send any information, yet they can engage in a joint computation to assess the risk of a transaction. Initially, the idea was B2B, which is utterly different from the decentralized world. But when we met with Andrew Masanto, Alex Page, and Andrew Yeoh, they opened our eyes to the possibilities of decentralization and what this could become: a public utility. It’s a much broader vision, much more ambitious and powerful.

vVv: What was the major innovation or insight that allowed your team to take SMPC (Secure Multi-Party Computation) and transform it into Nil Message Compute (NMC)? Can you explain the transformation and development process?

Miguel: Yes, it’s an example of divide and conquer. Usually, with SMPC, you use one cryptographic primitive and task it with many things. You want it to be correct, so that the nodes come up with an accurate result; you want it to be secure, and ITS security does not rely on cryptographic assumptions; and lastly, you want speed. Typically, linear secret sharing is the primitive many SMPC protocols use, and it delivers the first two. Still, it lacks speed because it requires a lot of communication during computation, so it compromises on speed in favor of correctness and security. What we did is combine two different primitives: one is linear secret sharing, specifically Shamir’s Secret Sharing (SSS); the other is a new primitive with homomorphic encryption properties called One-Time Masking (OTM), which performs multiplications or products of secrets very efficiently. It is the combination of these two primitives that generated a different trade-off between these three aims. From SSS we took correctness, ITS security, and the ability to work in a decentralized environment, because it has error correction and several other properties that are very useful for building a decentralized network. From OTM we took correctness and speed. The final protocol is NMC.

What we did is combine two different primitives: one is linear secret sharing, specifically Shamir's Secret Sharing (SSS); the other is a new primitive with homomorphic encryption properties called One-Time Masking (OTM). It is the combination of these two primitives that generated a different trade-off between these three aims [correctness, security, and speed].
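
To make the linear secret sharing half of that combination concrete, here is a minimal Python sketch of Shamir’s scheme: a secret becomes n points on a random degree-t polynomial, any t+1 of which recover it, while t or fewer reveal nothing. The field modulus, threshold, and node count are illustrative choices, not Nillion’s parameters.

```python
# A minimal sketch of Shamir's Secret Sharing (SSS) over a prime field.
# All parameters here are illustrative, not Nillion's.
import random

P = 2**61 - 1  # a Mersenne prime, used as the field modulus

def share(secret: int, t: int, n: int) -> list[tuple[int, int]]:
    """Split `secret` into n shares; any t+1 reconstruct it, t reveal nothing."""
    coeffs = [secret] + [random.randrange(P) for _ in range(t)]
    def f(x):
        return sum(c * pow(x, i, P) for i, c in enumerate(coeffs)) % P
    return [(x, f(x)) for x in range(1, n + 1)]

def reconstruct(shares: list[tuple[int, int]]) -> int:
    """Lagrange-interpolate the sharing polynomial at x = 0 to recover the secret."""
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        secret = (secret + yi * num * pow(den, -1, P)) % P
    return secret

shares = share(secret=42, t=3, n=10)
assert reconstruct(shares[:4]) == 42   # any t+1 = 4 shares suffice
```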

vVv: There’s a common notion that centralized computation, cloud computing, or similar technologies will be necessary to support the metaverse and Web3 in the future. How does Nillion’s technology fill that role and allow Web3 to scale in a safe and decentralized fashion?

Tristan: That notion is prevalent because of the current state of things. Whatever integration of the metaverse we have today – granted that the word metaverse means a lot of different things to a lot of different people – if we limit it to blockchain gaming, for example, most of it relies heavily on server-side computation on standard, archaic, centralized servers. The primary reason is that blockchains were just never made for computing. It was recently the birthday of Bitcoin’s white paper. Bitcoin was a solution to double-spending, a ledger for sending payments around. Then Ethereum came along and appended a compute layer onto the blockchain. It was genius and opened up much of the fascinating composability that exists on-chain today. But the throughput was limited, constrained by the original design of the blockchain. Everything is replicated on many machines, and while there are different scaling solutions like sharding, at the end of the day the high redundancy negates the possibility of horizontal scaling. What we see as the big differentiator for NMC and what Nillion is building is the ability to fill some of those roles of centralized servers and allow for more horizontal scalability. We only care about value; we don’t care about order. Our network doesn’t care about the order of transactions, which means that not everything needs to run on every node. That trade-off allows you to effectively fulfill roles in the computation stack which blockchains cannot execute.

vVv: Realistically, 99% of projects will fail on their own, without any interference from Nillion. At the end of the day, if you are succeeding, will you play a more passive role in adopting some of the projects out there?

Tristan: Yes, but whether they fail or succeed, at the end of the day we’re all helping each other. From a hiring standpoint, more individuals who can understand and create mathematically intensive work like Miguel’s equates to a larger candidate pool who can contribute to Nillion in the future, and more projects that can build products on our protocol.

vVv: Traditional applications depend on a multi-user access model, and IoT and supply chain applications require multi-party access to modify object-related information. Is there a method to share information with several parties without a centralized server structure and to define different access rights (read-only, modify, and delete) on the NMC network?

Miguel: The answer is yes. We are building a native authentication layer on NMC and an authorization layer. The plan is to support access control lists where you can define which parties are entitled to read, update, and delete your inputs, or are permitted to access results from computations performed on them. Initially, this will be reserved for computations that pertain to your own inputs; afterward, it will also cover computations that involve other parties’ inputs, which is more complex. We will then transition from access control lists into fully-fledged role-based access control models: instead of attaching permissions directly to node identifiers, we assign permissions to roles and roles to parties. With this, we decouple the assignment of nodes to roles from the assignment of permissions to roles, which grants more power and functionality (see the sketch below). In essence, we’re building something that should not surprise Web2 people; it’s just building the same concepts but on top of this decentralized infrastructure.

In essence, we're building something that should not surprise Web2 people; it's just building the same concepts but on top of this decentralized infrastructure.
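
As a rough illustration of the progression Miguel describes, here is a hedged Python sketch of an access control list next to a role-based model; the class names and permissions are hypothetical, not Nillion’s actual API.

```python
# A hedged sketch of ACL vs RBAC: an ACL maps parties directly to
# permissions, while RBAC decouples party->role from role->permission.
# All names here are hypothetical, not Nillion's API.
from dataclasses import dataclass, field

@dataclass
class Acl:
    entries: dict[str, set[str]] = field(default_factory=dict)  # party -> perms

    def allow(self, party: str, perm: str) -> None:
        self.entries.setdefault(party, set()).add(perm)

    def check(self, party: str, perm: str) -> bool:
        return perm in self.entries.get(party, set())

@dataclass
class Rbac:
    role_perms: dict[str, set[str]] = field(default_factory=dict)   # role -> perms
    party_roles: dict[str, set[str]] = field(default_factory=dict)  # party -> roles

    def check(self, party: str, perm: str) -> bool:
        return any(perm in self.role_perms.get(r, set())
                   for r in self.party_roles.get(party, set()))

# ACL: permissions are tied to specific party identifiers.
acl = Acl()
acl.allow("node-7", "read")
assert acl.check("node-7", "read") and not acl.check("node-7", "update")

# RBAC: changing a role's permissions updates every holder at once.
rbac = Rbac(role_perms={"auditor": {"read"}}, party_roles={"node-7": {"auditor"}})
rbac.role_perms["auditor"].add("compute")  # one change, all auditors gain it
assert rbac.check("node-7", "compute")
```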

vVv: How do NMC nodes process data without exchanging messages? To your knowledge, are there other networks that can process data similarly to NMC?

Miguel: Combining SSS with the OTM primitive, which has multiplicatively homomorphic properties, enables NMC to compute products without communication. As I mentioned previously, we’re combining homomorphic encryption with SMPC, and there’s a special type of SMPC called multi-party homomorphic encryption (MHE) that has both similarities and differences. The idea is that dealers send encryptions of their inputs (particles) to the network; it’s exactly the same at this stage. Then the nodes perform fully homomorphic encryption operations, so they operate locally on encrypted inputs, the same as in NMC. However, the complexity of fully homomorphic encryption is higher; in our case it is linear, like operating on plaintext. With MHE, you pay a more expensive computational cost, which is one difference. Another difference is that you have to deal with noise, because you’re using fully homomorphic encryption: every time you operate on encrypted inputs, the noise increases. To reduce it, MHE needs a bootstrapping protocol that re-encrypts with a different key, which again is implemented with communication. Hence, from time to time, communication must occur for noise reduction. The benefit of OTM is that it is partially homomorphic encryption with no noise accumulation, so no bootstrapping is needed; you can run any number of multiplications without an increase in noise. In the end, MHE collaboratively runs decryption, which is very similar to NMC. The difference in our case is that decryption is simply a reconstruction using SSS. This enables NMC to use error correction codes, something very cool but problematic with MHE, because MHE would need to be able to detect and correct errors introduced by the different parties. Others are looking for the same combination of homomorphic properties with SMPC; another approach worth mentioning is garbled circuits. The concept of a garbled circuit is that you take the desired function and garble it, creating an encrypted version. Garbling requires communication, but once you’ve created the circuit, you can run computations without communication. Again, the idea is the minimization of communication, but some communication remains necessary in these other solutions: garbled circuits require communication once to create the circuit, and MHE needs bootstrapping, which involves communication, followed by reconstruction.
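
For intuition about why multiplicatively homomorphic masking removes the need for messages, here is a hedged toy sketch: masked values multiply locally, and only whoever holds the combined blinding factor can unmask the product. In NMC that correction material comes from the pre-processing phase Miguel mentions; everything below is illustrative, not the actual protocol.

```python
# A hedged sketch of the multiplicative homomorphism behind OTM-style
# masking: inputs are blinded by random non-zero factors, nodes multiply
# the masked values locally (no messages), and the combined blinding
# factor is removed at reconstruction. Toy field, not Nillion's protocol.
import random

P = 2**61 - 1

def mask(x: int) -> tuple[int, int]:
    r = random.randrange(1, P)          # uniform non-zero blinding factor
    return x * r % P, r                 # (masked value, mask)

a, b = 12345, 67890
ca, ra = mask(a)
cb, rb = mask(b)

# Any node can do this step locally on masked data, with zero communication:
c_prod = ca * cb % P

# Whoever is entitled to the result removes the combined mask ra*rb
# (in NMC this correction comes from pre-processed randomness):
result = c_prod * pow(ra * rb % P, -1, P) % P
assert result == a * b % P
```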

vVv: What about regulations like GDPR? Will Nillion provide private data if requested by authorities for legal reasons such as tax-related purposes?

Miguel: We have to make a distinction between GDPR and other regulations. Nillion has, or will have, a built-in authentication and accounting system, and through that we will have to support different data protection regulations, including GDPR. The core infrastructure provides the right to portability, rectification, and the other user rights that empower individuals to control their information. On top of that infrastructure, developers will build different products. For example, an anti-money laundering (AML) product might, as a condition of its launch, have to comply with the 5th AML Directive and adhere to auditing requirements. Thus, if you have those agreements and an auditor requests certain information, the application developer will have to support that request. Nillion provides the protocol for you to implement and satisfy different regulations, whether health regulations, AML, or others. We cannot conceivably foresee all possible mechanisms to comply with every possible regulation on the market. Consequently, we have decided on this approach, where we handle the baseline regulation and compliance around data protection.

vVv: Most L1 blockchains claim to have solved the blockchain trilemma of security, scalability, and decentralization. Are you making those claims as well? If so, how will Nillion improve upon all of it?

Miguel: That’s a tricky one. To begin with, we’re not an L1 or a blockchain, so no direct comparison can be made. However, if we compare ourselves to SMPC, we occupy a special trade-off position among security, scalability, and decentralization. We have the security of the best SMPC solutions, which is ITS. We also support Byzantine fault tolerance, which means we can operate in an environment where some nodes are bad actors deviating arbitrarily from the protocol, without negative consequences for the network. Regarding scalability and speed, that is Nillion’s trademark: our protocol has very low end-to-end communication. For decentralization, we have an asynchronous solution. While most SMPC solutions are synchronous, our online computation phase is asynchronous, and we provide fast consensus achieved through error correction codes. It’s interesting to compare this with a blockchain, where consensus is based on both the value and the order of transactions. We don’t have transactions; therefore, our consensus is on the value of the computational output rather than on order. In exchange for that concession, we achieve very rapid confirmation of consensus, because it does not require nodes to exchange messages; instead, you run error correction codes locally on your machine and reconstruct the outputs of a computation. This is an exciting trade-off because it makes Nillion one of the few, if not the only, SMPC network that can have the ambition of becoming a truly decentralized network with a large number of nodes operating in realistic scenarios under asynchronous communications and bad actors.

[Nillion is] one of the few, if not the only, SMPC network that can have the ambition of becoming a truly decentralized network with a large number of nodes operating in realistic scenarios under asynchronous communications and bad actors.
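
As a hedged illustration of “running error correction codes locally”: Shamir shares are Reed-Solomon codewords, so a client can check share consistency (and, with a full decoder such as Berlekamp-Welch, even correct bad shares) without exchanging any messages. The toy below only detects a corrupted share.

```python
# A hedged sketch of consensus-by-redundancy: because Shamir shares are
# Reed-Solomon codewords, a client can check locally, with no messages,
# whether returned shares are consistent. Full decoders (Berlekamp-Welch)
# can even correct bad shares; this toy version only detects them.
P = 2**61 - 1

def interpolate_at(shares, x0):
    """Lagrange interpolation of the sharing polynomial at x = x0."""
    total = 0
    for xi, yi in shares:
        num, den = 1, 1
        for xj, _ in shares:
            if xj != xi:
                num = num * (x0 - xj) % P
                den = den * (xi - xj) % P
        total = (total + yi * num * pow(den, -1, P)) % P
    return total

def consistent(shares, t):
    """True iff all shares lie on one degree-t polynomial."""
    base = shares[: t + 1]
    return all(interpolate_at(base, x) == y for x, y in shares[t + 1 :])

# Honest shares of a degree-1 polynomial f(x) = 5 + 3x:
good = [(x, (5 + 3 * x) % P) for x in (1, 2, 3, 4)]
assert consistent(good, t=1)

bad = list(good)
bad[2] = (3, good[2][1] + 1)            # one node returns a corrupted share
assert not consistent(bad, t=1)         # detected locally, zero messages
```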

vVv: Does Nillion currently exist solely as a theoretical model, or is there a minimum viable product (MVP)? Do you have a date for product completion or test net?

Miguel: The Nillion protocol is not only a theoretical model but is now a reality, implemented with real nodes running in Rust code. Of course, the journey began with mathematical proofs audited by Royal Holloway, University of London, and eventually by other cryptographers. Currently, we are finishing the pre-processing protocol, the part that produces the randomness necessary for the computations. In Q4, we will start delivering the first end-to-end showcase, as what I’ve just described – the online and pre-processing phases – is our infrastructure MVP. The next step is building additional showcases on top of that infrastructure with the first use case, Nil Transfer. And then, in Q3 and Q4 of 2023, we will launch other showcases called Founding Entrepreneurs: people close to the organization who have been interested in the project from the beginning. They bring their expertise from other domains, such as identity, machine learning, and other fields, and they will be the first to develop solutions on top of our new infrastructure.

The Nillion protocol is not only a theoretical model but is now a reality, implemented with real nodes running in Rust code. The next step is building additional showcases on top of that infrastructure with the first use case, Nil Transfer. And then, in Q3 and Q4 of 2023, we will launch other showcases called Founding Entrepreneurs.

vVv: Are there any specific niches that might be early adopters of your technology?

Tristan: This is a topic we have yet to cover in much depth. We see Nillion as a protocol that bridges the chasm between Web2 and Web3. Fundamentally, we are a crypto-native Web3 protocol built as a public utility on a decentralized network. Despite this, many of our technology’s use cases are not only in crypto but are generally applicable. We’re talking to partners in a myriad of fields: medicine, identity, user data, user analytics for consumer marketing and relations, and we’re even holding conversations with big players. In essence, there isn’t a single sector we want to hear from specifically. We’ve built this cool technology that solves many problems for many people, and if you feel like you have a burning problem that it solves, we want to hear from you.

We see Nillion as a protocol that bridges the chasm between Web2 and Web3. Fundamentally, we are a crypto-native Web3 protocol built as a public utility on a decentralized network.

vVv: How do you envision building a strong and resilient community to creatively support Nillion, and also to support the Nillion network of nodes?

Tristan: From a developer perspective, how do we get them on board? That’s one of the easier things to sell compared to starting a blockchain or a Layer-1. When people come into our community and ask whether this is a blockchain, a Layer-1, or a Layer-2, we must step back and say, “this isn’t the right operating model.” We are a new piece of the decentralized computation stack. People will want to build on us, but we’re not going to be a network where you bridge in and find native DEXs, lending protocols, and all those other things. Our unique value proposition is creating opportunities to build a different suite of products: managing secrets, performing data analytics, and secret inputs and outputs for machine learning and AI. There’s no comparable parallel in traditional markets, which means that building a robust community is about selling people on a novel suite of tools that we’re continually adding to, one that allows developers to build things that don’t exist yet.

vVv: Will Nillion be open source? And if so, do you see open source as something only having positive effects on development, or are there risks involved?

Tristan: Open source, community, and ethos are essential to us and to what we’re building. Currently, we are building privately, and that’s an important business decision from a security perspective at this early stage of the company. However, the plan has always been for the code to be open source, and as we get closer to mainnet, we want our code out there so that people can look at it. We are already working on packages and libraries for the crypto community and plan to open source them soon.

vVv: How do you plan to attract and incentivize those developers to learn Nada and transition into this new ecosystem?

Tristan: Nada is the native language of Nillion’s MPC, used for our Multi-Party Programs (MPPs). Nada is not meant to be anyone’s primary language, and we’re not trying to build a community of hardcore Nada developers. When our compiler is fully functional, we would like an easy way for people to migrate their ideas and logic from other blockchains and dApps. Simply put, Nada is a transpilation target for other, more popular and widely used languages. If somebody wants to make private smart contracts on the Nillion Network, we’ll have a transpiler from Solidity and the EVM. Therefore, it’s less about convincing people to learn Nada and more about performance and efficiency. We want to streamline an easy onboarding method for anyone to transition to Nillion with their ideas and receive our security and services.

vVv: The NMC principle assumes the dealer node is in a secure environment, without malicious tracking of secrets or OTM processes. A caveat is that compromising the user’s system bypasses the NMC network’s guarantee of a secure environment for the dealer node. Is it easier and more performant to retrieve data from the network, perform the computation locally, and then redistribute it?

Miguel: Nillion can be used as a storage facility; that’s always a possibility. You would retrieve secrets from it, compute on them yourself, and then store the results back on Nillion. However, this method compromises security: as a client, your attack surface is larger because you are vulnerable each time you retrieve and compute on the secrets. Furthermore, you’re omitting some exciting use cases. Not all use cases are about computations on your own secret; some are about combining your secrets with those of others in a single computation. One example that comes to mind is a neural network. The owner of a neural network stores its weights as secrets in the network, and you want to run that neural network, without the weights ever being exposed, and then receive and reconstruct your results. In the model you’re suggesting, this wouldn’t be possible, because your machine would have to see everything to run it in plaintext. There are many use cases like this where there’s no other way of performing them except to compute on secrets while they remain secret, never revealed to any user.

vVv: How does Nil Message Compute (NMC) compare to ZK-rollups? What are some of the significant differences between the two, and what makes NMC a better choice?

Tristan: That’s an interesting question, but it’s not quite fair, because ZK-rollups have a specific job: to scale a Layer-1. ZK-rollups don’t primarily care about privacy; they compress a whole block into a proof smaller than the block itself and then prove that the state change is valid. NMC works with real data and maintains privacy standards for the computations, and we’re not proving that on-chain. We’re not going to run NMC within the EVM or some equivalent, because that model doesn’t work; metaphorically speaking, there’s no equivalent verifier between the two. But there are interesting parallels between what NMC and ZK-rollups can do. We’re not directly competing; they are tools for different jobs. There are many cool things people are trying to build with ZK-rollups that we foresee as more straightforward or interesting to implement with NMC, due in part to composability. The output of a ZK-proof is a mathematical proof, and that proof isn’t generally composable with other proofs for novel functionality; it serves only for validation. NMC, by contrast, results in data that can be applied, combined, and so forth.

vVv: Nillion provides novel solutions that have the potential to disrupt entire industries. How do you plan to navigate these potential risks?

Tristan: Outside of crypto, there’s a lot of disruption to be done in authentication (think Okta), and not just authentication but a variety of sectors with sleeping giants who perceive Nillion as a crypto protocol in Web3 and not a serious competitor. In that regard, it will not be difficult to sneak up on them. Within Web3, specifically among ZK-rollup teams and other privacy-enhancing technologies, I understand the desire to market a cutthroat narrative. But we’re at the beginning of an exponential curve for privacy-enhancing technologies, and these different technologies will change how we operate on the internet. There’s sufficient room for anyone building in a productive direction right now; we’re all building this industry together.

vVv: NMC performs continuous computations on all stored information. However, unsynced particles in offline NMC nodes are invalid. Is there a method for an NMC node to recognize the change in state and update the data? If not, is this increasing proportion of invalid data problematic for the node network?

Miguel: Excellent question. Yes, that can happen: if a node goes offline and reconnects, it will be out of sync, because in its absence the other nodes have continued performing computations, and the out-of-sync node will require updating. We have general protocols for joining the network as a node, leaving the network, and recovering from a failure such as the one you describe. The main goal is to preserve the consistency of the data: different nodes hold the same particles from a single input. In addition, there are shares protecting these particles, and those differ per node; for example, the share I hold of a given blinding factor will be different from Tristan’s. Protocols exist to address these situations, reshuffling the shares and updating nodes, but this requires communication and thus has an intrinsic cost. Luckily, the network is unperturbed, because there is tolerance for offline nodes: up to 1/3 of the nodes can be offline while the network maintains performance. When offline nodes resume, they can resync without causing significant disruption. If this occurs frequently, you can fine-tune the parameter, though there are consequences. In essence, even if a node goes offline, there is no need to overreact; the network can continue to operate and recover, albeit with some communication cost.
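
For readers curious what “reshuffling the shares” can look like, here is a hedged sketch of textbook Shamir share redistribution, which refreshes every node’s share (including a resynced node’s) without ever reconstructing the secret. It is illustrative, not necessarily Nillion’s exact recovery protocol, and it shows where the communication cost comes from.

```python
# A hedged sketch of standard Shamir share redistribution: a qualified
# subset re-shares its own shares (this step is the communication cost),
# and every node, including a resynced one, combines the sub-shares
# locally into a fresh, consistent share of the same secret.
import random

P = 2**61 - 1

def poly_eval(coeffs, x):
    return sum(c * pow(x, i, P) for i, c in enumerate(coeffs)) % P

def share(secret, t, xs):
    coeffs = [secret] + [random.randrange(P) for _ in range(t)]
    return {x: poly_eval(coeffs, x) for x in xs}

def lagrange_at_zero(xs):
    """Coefficients lambda_i with f(0) = sum(lambda_i * f(x_i))."""
    lams = {}
    for xi in xs:
        num, den = 1, 1
        for xj in xs:
            if xj != xi:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        lams[xi] = num * pow(den, -1, P) % P
    return lams

t, nodes = 1, [1, 2, 3, 4]
old = share(secret=99, t=t, xs=nodes)

# A qualified subset of t+1 nodes re-shares its own shares (communication!):
quorum = nodes[: t + 1]
lams = lagrange_at_zero(quorum)
sub = {i: share(old[i], t, nodes) for i in quorum}

# Each node combines its sub-shares locally into a fresh share:
new = {j: sum(lams[i] * sub[i][j] for i in quorum) % P for j in nodes}

# Nodes 3 and 4 (e.g., a resynced node) still encode the same secret:
check = lagrange_at_zero([3, 4])
assert sum(check[x] * new[x] for x in (3, 4)) % P == 99
```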

vVv: Knowing that years of public scrutiny and testing have been required in the past to identify and publish new encryption algorithms (NIST requested proposals for AES in 1997; it was approved and published as a standard in 2001), can you outline if and when Nillion is looking to be compliant with NIST, and what those steps could look like for a new cryptographic primitive like NMC?

Miguel: NMC is not reliant on cryptographic assumptions and is thus easier to prove correct. If we were reliant on cryptographic assumptions, that would be highly problematic: our task would be convincing the community, with sufficient evidence, of the mathematical hardness of those assumptions.

Fortunately, NMC is predicated on linear secret sharing and OTM, both widely accepted technologies that have been proven in the literature. OTM is essentially One-Time Padding (OTP), a very old technology; the only difference is that we multiply by a group element in lieu of adding one. The concept is the same as padding: masking a message with something uniformly random and independent produces something uniformly random and independent. Because NMC is based on pre-existing technologies, we expect the community to be very receptive to it.
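
Here is a hedged toy comparison of the two: a one-time pad adds a uniform key, while one-time masking multiplies by a uniform non-zero group element, and in both cases the ciphertext is itself uniform, which is the essence of information-theoretic security. Parameters are illustrative only.

```python
# A hedged sketch of Miguel's analogy: OTP adds a uniform key; OTM
# multiplies by a uniform non-zero group element. Either way, the
# ciphertext is uniformly distributed. Toy parameters only.
import random

P = 2**61 - 1

def otp_encrypt(m):
    k = random.randrange(P)
    return (m + k) % P, k               # additive pad

def otm_encrypt(m):
    k = random.randrange(1, P)          # non-zero multiplicative mask
    return m * k % P, k                 # (plaintexts live in the non-zero group)

m = 123456789
c_add, k_add = otp_encrypt(m)
c_mul, k_mul = otm_encrypt(m)

assert (c_add - k_add) % P == m                  # OTP decryption
assert c_mul * pow(k_mul, -1, P) % P == m        # OTM decryption
# Unlike pads, OTM masks compose under multiplication, which is what
# lets masked products be computed locally (see the earlier sketch).
```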

vVv: How do you envision broader adoption of Nillion’s technology by commercial entities or governments?

Miguel: I’m an advisor to RUSI, one of the oldest UK security think tanks, and I have held many encouraging conversations with regulators in the finance industry (the FCA, FINMA, and FinCEN). They are open to and familiar with ITS, because it does not leak information. However, they are not open to encryption based on cryptographic assumptions or to hashing, because those can be brute-forced with enough computational power. We have yet to receive an official confirmation, but if there’s any flavor of cryptography that commercial entities and governments will be comfortable with, it would be ITS.

We have yet to receive an official confirmation, but if there's any flavor of cryptography that commercial entities and governments will be comfortable with, it would be ITS.

vVv: The security of the code that implements cryptographic algorithms in smart contracts is of utmost importance. Can you share how Nillion is auditing its code?

Miguel: This is a critical problem to be cognisant of. Although mathematical proofs are theoretically sound and proven, their implementation and execution are entirely different matters. We aim to dissolve this disparity with our in-house implementation process and a two-step infrastructure audit. First, we move from the most theoretical version to more realistic ones, adding different non-functional requirements; this renders the implementation practical without losing traceability to the original proof. In this manner, we can test not only the functionality but also that the security of the protocol is preserved. Secondly, we run standard statistical tests to independently check that program variables are uniformly distributed and to ensure there are no information leaks. However, the introduction of Nada programs presents another security risk: when we compile them into the infrastructure’s arithmetic circuits, Nada adds another level of abstraction. Although you cannot alter the infrastructure, it would be possible for a program to leak inputs. Therefore, there must be a governance process for creating and publishing Nada programs to protect sensitive user data; this is currently a work in progress.
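
As a hedged example of that second step, a standard chi-square goodness-of-fit test can flag values that should be uniform (shares, blinding factors) but are not; the bucket count, sample size, and significance level below are illustrative choices, not Nillion’s test suite.

```python
# A hedged sketch of a uniformity check: bucket sampled values that
# should be uniformly random and run a chi-square goodness-of-fit test.
# All parameters are illustrative.
import random

P = 2**61 - 1
BUCKETS = 16
N = 100_000
CRITICAL = 30.578  # chi-square critical value, df = 15, alpha = 0.01

samples = [random.randrange(P) for _ in range(N)]  # stand-in for shares

counts = [0] * BUCKETS
for s in samples:
    counts[s * BUCKETS // P] += 1      # bucket by high-order bits

expected = N / BUCKETS
chi2 = sum((c - expected) ** 2 / expected for c in counts)
print(f"chi2 = {chi2:.2f} (reject uniformity if > {CRITICAL})")
```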

vVv: ITS claims to be secure against computationally unbounded adversaries, including quantum attacks. Are there other vulnerable areas that might pose a high risk as technology advances?

Miguel: ITS particles are protected by blinding factors, distributed in shared form, and can be published publicly on a blockchain or wherever one desires. Shamir’s Secret Sharing (SSS) and other linear secret sharing schemes are threshold-based: the threshold in the network determines the minimum number of colluding nodes that would be able to obtain a secret. As long as the number of colluders stays below that threshold, you’re still protected, even against futuristic quantum computers; there is no vulnerability in ITS. Conversely, if the threshold is breached, the colluders can reconstruct the secrets. Thresholds are a necessity because, eventually, we have to reconstruct the secrets or the results of a computation. Because of this, we need to be very careful on the path to a fully decentralized network, starting by operating a few trusted nodes. As the network grows and the risk of Sybil attacks shrinks, we can lower the requirements for joining the network.

vVv: How secure is OTM itself compared to homomorphic encryption, and can NMC be built using homomorphic encryption coupled with LSS instead of OTM?

Miguel: With OTM and other ITS primitives, there is no underlying cryptographic assumption, meaning the protocol is not based on the hardness of any mathematical problem. This also makes ITS quantum-resistant. In comparison, most Fully Homomorphic Encryption (FHE) schemes are lattice-based, and while not subject to any known quantum attacks, they are not ITS. This means FHE is safe only because there is currently no known attack. To note, there is a cryptographic assumption behind FHE that posits it as quantum-safe, but it’s an assumption nevertheless.

Theoretically, you could build NMC that way: Linear Secret Sharing (LSS) combined with Fully Homomorphic Encryption is what I previously mentioned as the Multi-party Homomorphic Encryption (MHE) paradigm. Again, the limitation of MHE is the bootstrapping, which requires communication and therefore has an intrinsic cost and a higher security risk. That being said, NMC is not just ITS: NMC is an umbrella of different solutions touching different points in the trade-off between security and speed. One example is conditional statements. If you have a program with a conditional like “if a is greater than b”, pure ITS is not the best encryption, because it partially deletes the order information between a and b. In this case, NMC combined with order-preserving encryption is a better choice. We want to present the whole umbrella of technologies ranging from pure ITS to incorporating other technologies, such as FHE and order-preserving encryption, into our suite of solutions.

We want to present the whole umbrella of technologies ranging from pure ITS to incorporating other technologies, such as FHE and order-preserving encryption, into our suite of solutions.
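
To see why order-preserving encryption helps with conditionals, here is a hedged teaching toy (deliberately weak; real OPE schemes are far subtler): scaling by a secret factor and adding bounded noise keeps ciphertexts comparable in plaintext order, whereas a uniform mask destroys that order.

```python
# A hedged toy illustrating why order-preserving encryption (OPE) helps
# with conditionals: a uniform mask destroys order, while an OPE-style
# mapping keeps it. This scale-plus-bounded-noise scheme is a teaching
# toy only; it leaks order by design, and real OPE schemes are subtler.
import random

K = 10**9  # secret scale factor (toy)

def ope_encrypt(x: int) -> int:
    return x * K + random.randrange(K)  # noise < K preserves ordering

a, b = 42, 1337
ca, cb = ope_encrypt(a), ope_encrypt(b)
assert (ca < cb) == (a < b)   # "if a > b" can be evaluated on ciphertexts

# Contrast: uniform multiplicative masks carry no usable order information,
# which is why pure ITS masking makes comparisons awkward.
```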

vVv: How do you determine the ideal T+1 number of shares required for LSS to reconstruct the blinding factor (where T shares reconstruct nothing)? Is it determined by the number of online network nodes or by the nodes assigned per computation? Does it vary with the size and resource requirements of the computation? And is the minimum threshold to maintain security, T+1, 1/3 of the nodes in the network?

Miguel: Part of this question involves the economics of an attack, which I will leave to Tristan; I’ll start with the more technical aspects. Imagine a linear scale: on one end, an attack that alters a computational result; on the other, an attack that gains information from the secrets. The threshold moves between these two, and the closer you set it toward resisting one type of attack, the more vulnerable you become to the other. This is where the 1/3 tolerance for bad actors comes into play: up to 1/3 of the nodes can attempt either type of attack to no effect. In addition, it’s a ratio of the total number of nodes to the number of bad actors, not an absolute quantity, except for the economic consequences, which Tristan will touch on.
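
To make that ratio concrete, here is a hedged back-of-the-envelope based on standard coding-theory bounds for Shamir sharing; the exact accounting in Nillion may differ.

```python
# A hedged back-of-the-envelope for the 1/3 figure: with a degree-t Shamir
# sharing, t colluders learn nothing (privacy), and Reed-Solomon decoding
# of n shares can correct e corrupted shares when n >= t + 2e + 1
# (correctness). If the same f bad actors both collude and corrupt
# (t = e = f), the bound becomes n >= 3f + 1, i.e., f < n/3.
def max_bad_actors(n: int) -> int:
    """Largest f with n >= 3f + 1."""
    return (n - 1) // 3

for n in (4, 10, 31, 100):
    f = max_bad_actors(n)
    print(f"n = {n:3d} nodes -> tolerates f = {f:2d} bad actors "
          f"(privacy vs {f} colluders, corrects {f} bad shares)")
```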

Tristan: This delves into the philosophical discussion about decentralized networks in general and the security governance of the network. We could have made a completely permissionless protocol on day 1, allowing anyone to host a node and stake whatever they want (financial or other assets, their reputation, etc.), and then granted users autonomy over node selection for computations, with the selected nodes forming a pool to compute the user’s secrets. That scenario is impractical and has many implications, not the least of which are the security implications of malicious or ignorant users and bad nodes. We want to ensure that the Nillion network is of the highest security.

Decentralization and fully permissionless narratives are attractive in the crypto space, but here they would come at the cost of security and the exploitation of protocol users and builders. To tie into that, our models also take into consideration node operators and their individual network stake, based on capital investment, restaking, and reputation. This allows us to model the probability of bad-actor nodes and tune network parameters to build the most secure environment.

vVv: Thank you very much, Tristan and Miguel. I hope you enjoyed the AMA as much as we all did. We have enough questions to fill at least 5 more AMAs, and I would be very excited to host a future one.

Tristan: Yes, it’s been amazing. Thank you very much.

Miguel: Thank you very much. It’s been an absolute pleasure.

Listen to the Spotify recording below for the full Nillion Network AMA.
