Decentralized computing and storage
"We were contemplating how to invent a blockchain infrastructure resilient to centralized vulnerabilities"
Sonya Sun
April 3

About the background

I first heard about Bitcoin a long time ago; I was mining it back when you could still mine something on an ordinary computer. Ethereum's arrival brought a much bigger idea: blockchain technology could be used not only to reinvent money and the financial system, but for a far broader range of things. Ethereum put a virtual machine on top of the blockchain, which got people thinking about a much wider variety of applications.

About Fluence Labs

By web3 standards, our company is relatively old. We began our venture back in 2017. At that time, both Dmitry, our co-founder, and I were observing the blockchain scene from the sidelines. We were captivated by the idea of reinventing internet infrastructure using the blockchain to address the problems of centralization, ownership, and control of infrastructure, applications, and services.
There's DNS on the internet, but it's controlled by an American organization. It was established as a neutral entity, but it still turns a profit, and specific individuals make decisions within it.
There's also the cloud infrastructure where a significant portion of the internet and its services is hosted, and there are essentially only three providers: Amazon, Google, and Microsoft. When such giants control the infrastructure, numerous pain points arise where things can go awry. Technical issues can emerge when an admin or DevOps engineer shuts down a cluster or data center. Alternatively, a government might dictate: "You're on our territory, ban this client." Essentially, any employee of these companies could act against clients' interests, and there are a huge number of such employees.
That's why centralized control isn't good. So, we started contemplating how to create a blockchain infrastructure resistant to centralized vulnerabilities.

Back then, the landscape looked like this: Ethereum existed, along with a handful of early competitors. In terms of cloud-computing analogs, there were projects like StorJ, DFinity, and Golem, and that was essentially it. We delved into distributed databases and encrypted data, researched a range of approaches, and spent a significant amount of time figuring out what made sense to pursue. Eventually, we arrived at what is now Fluence.

Today, Fluence is a decentralized computing platform and a marketplace for computational resources. This means that anyone from a Raspberry Pi owner to a professional data center can join the network, offer their resources, and earn money. On top of these resources sits a developer stack. Our primary customer is the developer who uses Fluence the same way they would use clouds. Clouds offer serverless products: databases and cloud functions that let developers abstract away from the hardware they run on and stop worrying about scaling servers.
Fluence aims to give developers a decentralized counterpart to that traditional service. To start using Fluence, you just write the functions that implement your application's backend and deploy them to our network, as sketched below. You can deploy to one or multiple providers. Furthermore, Fluence offers several built-in features, such as Fault Tolerance.
All of this can be described in our programming language, Aqua, which was designed specifically for building secure distributed systems.
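Here is a rough idea of what such a backend function can look like in Rust, the kind of code that gets compiled to WebAssembly and deployed to providers. This is a minimal sketch assuming the marine-rs-sdk crate from Fluence's Rust tooling; the function name and its behavior are illustrative and not taken from this interview.

```rust
// Minimal sketch of a Fluence backend function (assumes the marine-rs-sdk crate).
use marine_rs_sdk::marine;
use marine_rs_sdk::module_manifest;

module_manifest!();

pub fn main() {}

// #[marine] exports the function from the compiled WebAssembly module
// so that it can be called remotely once deployed to one or more providers.
#[marine]
pub fn greeting(name: String) -> String {
    format!("Hi, {}! This ran on a Fluence provider.", name)
}
```

Once compiled to WebAssembly and deployed, a function like this can be invoked from Aqua scripts on one or several providers.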

By default, everything you deploy is replicated across multiple providers by a coordination algorithm. Fluence provides tools to customize and program Fault Tolerance, scalability, and everything else the backend needs.

I can program how my backend, with its various instances or functions deployed across different providers, operates as a whole to achieve exactly what I need. Some of this is built into Fluence by default: currently, for instance, there is Fault Tolerance, but there's no Autoscaling feature yet. And you can pay for all of this with stablecoins on the blockchain, or, for that matter, any other ERC-20 token.
Another critical development we're pursuing is computational proofs. These help sidestep situations where one has to construct a provider reputation system. Essentially, we're crafting a more stringent model in which one can rely on the accurate execution of code within Fluence. Blockchains also guarantee this, but there, hundreds or thousands of nodes execute the same calculations to ensure security through consensus. Fluence has no global consensus, but it does have proofs of accurate code execution. When you deploy a function across multiple providers, you can optionally add consensus, but this makes the execution costlier.
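To make that cost trade-off concrete, here is a toy Rust sketch, illustrative only and not Fluence's actual API: it models the difference between trusting a single provider's execution proof and paying for several replicas whose results are cross-checked.

```rust
// Toy model of the trade-off between one proven execution and consensus over replicas.
// Illustrative only; none of this is Fluence's real API.
fn agreed(results: &[u64]) -> Option<u64> {
    // Accept a result only if every replica returned the same value.
    match results {
        [first, rest @ ..] if rest.iter().all(|r| r == first) => Some(*first),
        _ => None,
    }
}

fn main() {
    // One provider: cheapest; correctness rests on the execution proof it submits.
    let single = [42u64];
    // Three providers: roughly three times the cost, but results can be cross-checked.
    let replicated = [42u64, 42, 42];

    assert_eq!(agreed(&single), Some(42));
    assert_eq!(agreed(&replicated), Some(42));
}
```

The point is simply that each added replica multiplies the execution cost, which is why consensus is optional rather than the default.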
Fluence is currently in its private testnet phase. We have developer tooling and payments in place, but there aren't any computational providers on the marketplace yet. If you want to use Fluence now, you'll have to work with nodes that we exclusively provide. Of course, this is free of charge. Our next step is to progress to a public testnet, onboard computational providers, and eventually transition to Mainnet with a live economy and payments.
Take Filecoin, for instance. It's a decentralized marketplace for storage providers. There's a client interface where files are uploaded. They have an incentive mechanism where providers commit to storing files for a set duration. If they fail to do this, they lose their stake. If they abide by the commitment, they validate it with proofs. This model closely resembles ours.
We also have a provider marketplace, just with different hardware. Similarly, we have a proof system; however, from a developer's standpoint, it’s a slightly different product and tier.
Another example is the Akash Network. Their approach is more straightforward. They too have a marketplace, but it essentially matches a client with some provider. However, there are no guarantees, for example, that the provider won't go offline.

About distributed computing and storage

I believe that decentralized distributed computing is more complex due to the vast variance in use cases, formats, packaging, and tasks.
A database is not just about storage; it's both storage and computing. However, the industry started thinking about storage first. IPFS, for instance, kicked off around 2015 with its first white paper. Only later did thinking shift to computing, and only now are relatively usable products emerging. Not all of them run on-chain, but they offer a viable product developers can use.

Building infrastructure is quite challenging. There are many pieces that need to be brought up to a baseline level of usability. For instance, we had to take a marketplace where you can deploy whatever you want and refine it into a usable product. That means the customer has guarantees that the code they want to execute will actually run, and the provider executing that code is assured it won't break out of its container and damage their operating system.

We had to invent many low-level concepts. Specifically, we developed Aqua, a distributed execution protocol and domain-specific language. With it, you can script computations in terms of the nodes and functions to be executed, supplemented with conditions, operators, loops, and so on. Aqua is analogous to AWS Step Functions, but it can run securely in a public P2P network and offers more language features.
Every Fluence node runs a virtual machine that executes Aqua, and every node serves incoming Aqua-based requests. Each request is a data packet containing a script that spells out exactly how the request is to be executed across different functions on different peers. This is how we build an infrastructure that can run distributed algorithms without continuously redeploying them across all nodes or a subset of nodes: the algorithms are programmed into the originating requests themselves.

The simplest example of what can be done is a P2P chat between two devices using the Fluence network in between. In such a scenario, the Fluence client on one device would create a data request, essentially saying, "Go to this node and execute this function, then send the function's result to that device." In this manner, we'd transmit a message from one device to another via a Fluence node. But beyond that, things get significantly more complicated.
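To make the idea of a request that carries its own coordination script more concrete, here is a purely illustrative Rust sketch. These types and field names are not Fluence's actual ones and do not come from the interview; they only model the concept of a self-describing request hopping between peers.

```rust
// Illustrative only: a toy model of a self-describing request, not Fluence's real types.
struct Request {
    script: String,   // coordination script: "call this function on that node, then forward..."
    payload: Vec<u8>, // data travelling with the request, e.g. the chat message
    ttl_ms: u64,      // how long the request may keep hopping between peers
}

// A relay node only interprets the script it receives; nothing is pre-deployed for this flow.
fn handle(req: Request) {
    // 1. Parse req.script to find the step addressed to this node.
    // 2. Execute that step, e.g. call a local function with req.payload.
    // 3. Forward the request to the next peer named in the script, until req.ttl_ms runs out.
    let _ = req; // placeholder: a real node would implement the steps above
}

fn main() {
    let req = Request {
        script: "on node X: call relay, then deliver the result to device B".to_string(),
        payload: b"hello from device A".to_vec(),
        ttl_ms: 7_000,
    };
    handle(req);
}
```

In the chat scenario above, the script would name the relay node and the destination device, and the message itself would travel in the payload.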

Our primary challenge lies in inventing low-level infrastructure without relying on the cloud. We don't use cloud tools because they're designed for trusted infrastructures where someone oversees the entire setup. Our aim was to create a system that can exist without a central administrator.

About Multicoin Capital's contribution to distributed infrastructure

They believe in distributed infrastructure and can invest in the most viable projects in this domain. When we announced our funding round, Ceramic did the same. Ceramic is a distributed database. The challenge of implementing such an idea has been around for many years, but Ceramic is among those who've approached it correctly. Many appreciate their solution, which is why Multicoin invests in them.
Multicoin's portfolio also includes RenderToken, a project dedicated to rendering video and other workloads on GPUs. Ethereum has transitioned to Proof-of-Stake, which means a lot of GPUs are becoming available, since mining on other chains is considerably less profitable. Now the challenge is finding applications for these GPUs, and some projects are trying to get providers to use them for rendering video or running machine learning.
Multicoin also invested in a project developing a distributed CDN. That idea has been floating around for years, but no clear leader with a compelling offering has emerged.
Kyle, the founder of Multicoin Capital, consistently emphasizes the same point: Fluence should have clear use cases and a clear understanding of its customers. He always advised against spreading ourselves too thin or building a platform for everything; instead, he recommended focusing on specific segments and understanding our market.

About Fluence's use cases

We position ourselves as a private testnet, not a production system. Right now, we aren't aiming to onboard the maximum number of customers or push them to launch real products on Fluence. We have many partnerships and experiments, but we aren't promoting them aggressively.
In the future, we want to address all cloud users who, for instance, utilize AWS Lambda. That is, Fluence's use cases resemble those of cloud functions – data pipelines, data processing, various bots, and so on.
Fluence is valuable to web3 for things like oracles: it can index chains and power bots. However, some features are still missing, such as support for popular programming languages like Python or JavaScript for writing the functions deployed on Fluence.
We compile everything into WebAssembly, so currently Rust and C++ are supported. However, people want Python and JavaScript. We are definitely working towards adding Python; it will significantly expand the opportunities and lower the barrier to entry for developers. Right now, many people hold off on using Fluence simply because they aren't familiar with Rust.
Also, in Fluence, users can choose providers, and there are algorithms that hide this magic behind the scenes. We don't yet have filters for selecting providers by geolocation or hardware, because those are non-trivial challenges. How can you verify that a provider's servers are really in Europe when they claim so? Or, if they claim to have powerful hardware, how can you verify that?
We aim to implement Proof-of-Capacity, which is about rewarding the allocation of specific computational resources to the network. With it, a provider can prove that they really have a certain amount of resources, such as CPU and memory. It's a direction we'd like to head in the future.

About the opportunities and downsides for providers

Technically, even a laptop owner can become a provider, though it's doubtful anyone would want that: laptops are slow and might shut down unexpectedly. The protocol itself isn't restrictive, and if someone has a network of interconnected laptops willing to perform tasks cheaply, that's fine.
It's more a matter of demand. We focus on scenarios where, for instance, there's a backend service that needs to respond quickly; for those, laptops won't suffice. But if the task is something like searching for extraterrestrial life, then why not?
The primary hurdle lies in the developer experience. Take Vercel, for example, which frontend developers and JavaScript coders use to deploy JavaScript applications. It provides functions that internally run on AWS Lambda, which shows how deeply AWS is integrated into many services: people use Amazon's infrastructure not directly but through various services built on top of it.
Heroku, for instance, also lets you deploy almost anything, with Amazon underneath. Fluence and other decentralized technologies aren't at that level of integration yet; even the tooling around Filecoin is still rudimentary.

People have become accustomed to not working with the Amazon API directly, but through a layer that simplifies development. Such an experience doesn't exist in web3 infrastructure yet, but it will get there. It takes time to refine the basic tools and then start building the integrations that improve the developer experience.

The issue of providers passing off AWS instances as Fluence nodes can be resolved purely economically. When providers with their own data centers enter the network, they quickly undercut such resellers because their costs are considerably lower.

For example, Amazon has various pricing models: the default on-demand rates, cheaper options, and spot prices for resources that are available only for a limited period. These temporary resources are much cheaper; the most affordable hardware you can get on Amazon is at spot prices. If you devise an algorithm that quickly buys spot capacity and migrates to new instances when it disappears, you can get the cheapest possible infrastructure from Amazon. Yet even at those prices, the hardware would still be more expensive than owning your own data centers.

There's nothing wrong with people running Fluence nodes on hardware rented from Amazon. Code can still be deployed, and computational resources are available. However, this doesn't address our decentralization challenge. The higher the demand, the more professional providers join, and once real data centers are in the network, competing against them on Amazon hardware becomes financially impractical.

About the future of decentralized computing

In reality, it's all evolving gradually. Over the past few years, there have been numerous instances of censorship in all sorts of places: GitHub banning accounts or projects, Twitter starting to behave unpredictably. As soon as a large corporation censors its users, or a government forces a corporation to censor users, there's an immediate push toward infrastructure that is protected from that kind of interference.
Generally, the demand to reduce platform risk has long existed among businesses. The question is whether there's a solution for this. Say, I want to reduce risk and migrate from Amazon today. To where? Well, I can set up servers in data centers. And then what? Go back to the way things were 20 years ago before the invention of the cloud? I want the features and ease with which I manage my cloud infrastructure, but with fewer censorship risks.
Our focus right now is to get everything finished as quickly as possible. We want to spur greater demand for Fluence and decentralized infrastructure in general. That's the first point. Second, blockchains now make effective economies possible. Knowing that Amazon and the other clouds have high margins on cloud services, we're confident that the cost of computation on Fluence will be lower than on other clouds, while the experience will be superior.
Price is a crucial factor for businesses using clouds. Modern businesses rely on them and spend a lot on them, so optimizing those expenses would certainly be welcome.
We believe we can compete on two fronts: platform risk and low prices. How much lower, I can't say yet, but we hope it's at least tenfold.

About the community and the Fluence token

Right now, we're targeting developers as our community. But "community" is a broad concept with various roles within it. When we launch on Mainnet, we'll also have computational providers and token holders, people who will participate in the DAO and governance.
Currently, we're partnering with developer communities. We also host events and sponsor hackathons. We recently partnered with Developer DAO during the Denver conference. We also have some yet-to-be-announced partnerships in India. We aim to keep Fluence in the limelight, even if the project isn't entirely ready for production. It's essential for people to try it and give feedback.
At hackathons, we often have bounty programs. As of now, we don't have a list of active bounties outside of hackathons, but we aim to establish one. In web3, it's entirely normal for projects to have bounties, using them to attract developers.
For us, the Mainnet launch means introducing the computational marketplace and on-chain payments. We'll definitely be launching on an EVM chain, but we're still deciding which one. And there will certainly be a token. We're targeting the Mainnet launch for this year and genuinely hope to make it in time. Plans are always just plans, but we're firmly committed to realizing ours.