6. Centralizing DAO governance

Which brings us to...the NNS DAO, of course. The idea is to have a decentralized governance mechanism that can seamlessly upgrade the entire protocol. This is a nice capability to have, and in practice it has worked very well. But at the moment it is extremely centralizing. Just as PoW leads to mining pools, pure liquid democracy here seems to lead everyone to follow the most trustworthy-appearing entity, which in this case is @dfinity. This essentially gives DFINITY write access to the protocol, as long as they have the followers.

I contend that a governance mechanism should never allow one entity to have that much power, no matter if the people want it. There's a reason decentralized governments have term limits and separation of powers: the point is to diffuse power and prevent it from ever accumulating too much. The NNS DAO lacks these basic checks and balances. I believe these things could be improved and changed to some extent.

But do we even need an all-powerful DAO, even one with checks and balances? Is it possible to just build a simple protocol, like TCP/IP, that requires few changes and is robust over years? That seems to be the aim of AO; ICP has not chosen that path. With AO, processes bring their own VMs, their own implementations it seems. This moves much of the implementation complexity to the developer, above the protocol layer. The protocol is just a base layer for compute, memory, and storage primitives. This is an intriguing and beautiful idea.

The power of the NNS seems a major risk to adopting ICP for an application, because applications will be subject to the possibility of one entity shutting them down. Until the NNS provides better guarantees, that is the current reality. I talked to one team at ETH Denver that said this capability of the NNS was a non-starter for their project, and I don't believe they're the only ones.
5. Rigid network architecture

We just touched on this quite a bit, actually. But beyond the canister choosing its level of security, I wonder if it would be very beneficial to allow node operators to bring various types of hardware and to configure themselves as they wish...in fact, I was never a fan of the subnet architecture, at least at the beginning. It feels like yet another complication and limiter for devs, yet another thing that must be thought about.
The arguments for the subnets probably focus on...well, at least one will be ensuring a homogeneous compute environment where latency, speed, etc. can be well-regulated, giving devs consistent guarantees.
Perhaps a homogeneous network is required for this. But I wonder if AO can achieve the same thing, since each process and message specifies its CPU and memory requirements. If there are enough CUs available, I would hope they could process things at appropriate speeds.
We shall see. If AO can provide good guarantees and latency with a heterogeneous network, that design seems better. Anyone can bring whatever hardware they want; they will only process what they can. Applications/processes can determine the security they want through stake and CU replication. It's very permissionless and flexible, and there is no need for a central authority like the NNS DAO.
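To see why choosing your own CU replication is a meaningful security dial, here's a simple binomial sketch (my own illustration, not AO's actual security model): if each independently sampled CU is malicious with some probability, the chance that a dishonest majority corrupts your result falls off quickly as you replicate.

```python
from math import comb

def compromise_probability(n: int, threshold: int, p_malicious: float) -> float:
    """Probability that at least `threshold` of `n` independently
    sampled compute units are malicious, assuming each one is
    malicious with probability `p_malicious`."""
    return sum(
        comb(n, k) * p_malicious**k * (1 - p_malicious)**(n - k)
        for k in range(threshold, n + 1)
    )

# With 30% of the network malicious, requiring a majority of the
# replicated CUs to agree drives the attack probability down fast:
for n in (1, 5, 15):
    majority = n // 2 + 1
    print(n, round(compromise_probability(n, majority, 0.3), 6))
```

The point of the sketch: an application that only needs weak guarantees can run one CU cheaply, while a high-value application can buy more security by paying for more replicas — no protocol-wide replication factor needed.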
4. High costs

It costs about $1 to upload a GiB of data, for example. Everything is replicated on very powerful machines, so in the end you have to somehow pay for those machines to perform the replicated computation and storage. I'm not sure how to overcome this, and AO might not be any better, depending on which and how many CUs you run. But AO is much more flexible in allowing your application to choose its level of security and replication. ICP canisters lack this type of flexibility: they can't choose machine type, and they can barely choose a replication factor. The subnet architecture is so rigid that there are basically only two types of subnets to choose from at the moment. And once canisters are deployed, they are stuck in those subnets.

Imagine if canisters could easily choose the level of security they required, and could even move up and down in their desired level of security dynamically. This could help each application find appropriate costs and latencies.
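A back-of-envelope calculation shows why a choosable replication factor matters for cost. The numbers here are hypothetical: I'm assuming the ~$1/GiB figure corresponds to full 13-node replication (the standard ICP application subnet size) and that cost scales linearly with replicas.

```python
def storage_cost_usd(gib: float, per_gib_per_node_usd: float, replication: int) -> float:
    """Back-of-envelope: replicated storage cost scales linearly
    with the replication factor (hypothetical model)."""
    return gib * per_gib_per_node_usd * replication

# Implied per-node cost, IF $1/GiB reflects 13x replication (assumption):
per_node = 1.0 / 13

print(storage_cost_usd(100, per_node, 13))  # ~$100 for 100 GiB at 13x
print(storage_cost_usd(100, per_node, 3))   # ~$23 if an app could choose 3x
```

Under this (admittedly simplistic) model, an application that doesn't need full-subnet security could cut its storage bill by several times just by dialing replication down.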
3. High latencies

This one is really killer (in a bad way), and I fear it greatly, because maintaining pBFT-like consensus seems to impose a fundamental floor on latency. ICP can optimize, but that floor will always be there without some revolutionary breakthrough, trade-off, or rearchitecture. AO seems to have very good latencies of under 1 second from what I know...I don't believe that is with consensus across multiple processes, though. But if you embrace staking and crypto-economic security instead of pBFT-like consensus, perhaps in practice you get similar levels of security with much less latency? This is a very interesting point to ponder.
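Why is there a floor? BFT-style protocols need several sequential message exchanges among geographically distributed replicas before finality, and each exchange costs at least one network round trip. A rough sketch with hypothetical numbers:

```python
def consensus_latency_floor_ms(rounds: int, rtt_ms: float) -> float:
    """Rough lower bound on finality latency for a consensus protocol
    that needs `rounds` sequential message exchanges among replicas.
    No amount of software optimization removes the rounds * RTT term."""
    return rounds * rtt_ms

# Hypothetical: replicas spread across continents (~150 ms RTT) and
# ~3 sequential phases (pre-prepare/prepare/commit in classic PBFT)
# give a floor of ~450 ms before any execution or queuing time.
print(consensus_latency_floor_ms(3, 150))
```

A staking-based model that accepts a single (or few) executors' results optimistically pays roughly one round trip instead, which is where the sub-second figures would come from.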
2. Message limit of 2/3 MiB

Any data going in or coming out is subject to this limit. It makes uploading data very difficult (maybe not so difficult once you chunk, but then all clients in all languages for all purposes must implement that chunking) and very slow. File uploading is the worst here...for the video streaming demo I uploaded only a ~600 MiB file, and that took a few minutes. I'm not sure how AO compares...that's a good question to find out.

I would love to see ICP solve this problem. Abstracting the chunking away generally would be a good start; this could perhaps be done at the boundary node level and/or similar to how Wasm binaries are now uploaded. But that won't solve how slow it would still be to upload files. We've got to solve this somehow.
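The chunking burden every client currently carries looks roughly like this (a generic sketch, not the actual agent API):

```python
CHUNK_SIZE = 2 * 1024 * 1024  # stay within the ~2 MiB ingress message limit

def chunk_bytes(data: bytes, chunk_size: int = CHUNK_SIZE) -> list[bytes]:
    """Split a payload into message-sized chunks. Today, every client
    in every language has to reimplement some variant of this loop,
    plus retries and reassembly on the canister side."""
    return [data[i:i + chunk_size] for i in range(0, len(data), chunk_size)]

# A 5 MiB stand-in payload becomes 3 sequential (or batched) upload calls;
# a ~600 MiB file becomes ~300 of them.
chunks = chunk_bytes(bytes(5 * 1024 * 1024))
print(len(chunks), len(chunks[0]), len(chunks[-1]))  # → 3 2097152 1048576
```

This is exactly the kind of boilerplate that could live at the boundary node layer instead of in every client.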