Let me briefly share my view on how MR (mixed reality) will be combined with Web3.

Let's talk about MR first. The concept of MR may be less familiar than VR and AR. The latter two can be loosely understood as a three-dimensional imaging perspective, which can be applied directly in games, movies, and other fields.

MR, on the other hand, takes that perspective and blends it with interaction in the real world. For example, as Brother Sun often shows in his videos: inside the head-mounted display you see a 3D scene and appear to be picking up gold coins, while in reality you might be sweeping the floor.

That is a basic example of MR blending the virtual and the real. In a flat 2D game, picking up gold coins on the map is something you only watch a game character do; in the MR perspective, you complete the action yourself (picking up coins, building virtual structures, and so on) in a virtual environment by interacting with reality.
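
To make that virtual/real mapping a bit more concrete, here is a purely illustrative sketch (none of these types or functions come from a real MR SDK; they are hypothetical) of the idea that a recognized real-world action is what completes the virtual one:

```typescript
// Purely illustrative: map a recognized real-world motion to a virtual in-game action.
// The point is that in MR the player's physical movement triggers the virtual event,
// instead of the player merely watching a character perform it on a 2D screen.

type RealWorldAction = "sweep_floor" | "reach_forward";
type VirtualAction = "collect_coin" | "place_block";

// Hypothetical mapping rules between detected motions and in-game interactions.
const actionMap: Record<RealWorldAction, VirtualAction> = {
  sweep_floor: "collect_coin",
  reach_forward: "place_block",
};

function onRealWorldAction(action: RealWorldAction): VirtualAction {
  const virtual = actionMap[action];
  console.log(`real action "${action}" -> virtual action "${virtual}"`);
  return virtual;
}

// Sweeping the floor in reality is what makes the coin pickup happen in the headset.
onRealWorldAction("sweep_floor");
```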

These concepts existed before the launch of Apple Vision Pro, but its launch has arguably taken them to a whole new level.

And because it is a flagship product line for Apple, it is reasonable to expect that, as the hardware iterates with Apple's backing, it will lead the shift in how users experience this perspective in the near future.

So how can this be integrated with Web3❓

For a project I previously worked with, I planned two angles, serving the B-side (business) and the C-side (consumer) 👇🏻

  • B-side: upgrade existing gaming ecosystems or educational projects in the industry to an MR perspective

  • C-side: self-developed MR applications offered to incoming users, much like games

The above is only a brief summary, and I won't go into specifics, but B-side cooperation requires certain industry resources. My suggestion is to pick a particular ecosystem, preferably one that gives stronger support to gaming projects. I think some of you can guess which one I mean.

The general logic is to upgrade and provide the MR perspective; the longer-term strategy is to act as the MR layer of the infrastructure, providing technical access for projects that can make use of the MR perspective.

This is a fairly large narrative, but it is not mainstream in the current market, and top-tier institutions are not paying much attention to it yet.

But in this cycle you can focus on AI and DePIN, because some of the underlying technologies overlap. When the next cycle arrives, or the Apple Vision Pro tailwind really starts to blow, it will not be too late to pivot the positioning.

In summary🔻

(1) For the C-side, MR is essentially a distinctive way of experiencing the game, and the design of the whole token economy can follow the mainstream GameFi dual-token model (a minimal sketch follows this list);

(2) For the B-side, it is not just about turning a two-dimensional picture into a three-dimensional one, but also about adding the user's interaction with reality;

(3) Risk points: because real-world elements are involved, regulators in some regions may hinder this type of project locally; see STEPN as a reference;
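
As a reference for point (1), here is a minimal, hypothetical sketch of a GameFi-style dual-token model: a fixed-supply governance token plus an in-game utility token that is minted through play and burned by in-game sinks. Every name and number below is an illustrative assumption, not a spec for any specific project.

```typescript
// Hypothetical dual-token GameFi sketch (all symbols and numbers are illustrative).
// Governance token: fixed supply, used for staking/voting, never minted in-game.
// Utility token: uncapped, minted as gameplay rewards, burned by in-game sinks.

type TokenLedger = { symbol: string; totalSupply: number; balances: Map<string, number> };

function createLedger(symbol: string, initialSupply = 0): TokenLedger {
  return { symbol, totalSupply: initialSupply, balances: new Map() };
}

function mint(ledger: TokenLedger, user: string, amount: number): void {
  ledger.totalSupply += amount;
  ledger.balances.set(user, (ledger.balances.get(user) ?? 0) + amount);
}

function burn(ledger: TokenLedger, user: string, amount: number): void {
  const balance = ledger.balances.get(user) ?? 0;
  if (balance < amount) throw new Error("insufficient balance");
  ledger.balances.set(user, balance - amount);
  ledger.totalSupply -= amount;
}

// Governance token pre-minted at genesis (distribution omitted in this sketch).
const govToken = createLedger("GOV", 1_000_000_000);
// Utility token starts at zero and is emitted only through gameplay.
const utilToken = createLedger("UTIL");

// A player completes an MR action (e.g. "picking up coins" while doing a real-world task)
// and earns the utility token; upgrading an in-game asset burns part of it back.
mint(utilToken, "player1", 50);      // gameplay reward
burn(utilToken, "player1", 30);      // sink: upgrade / repair cost
console.log(utilToken.totalSupply);  // 20 — net emission after the sink
console.log(govToken.totalSupply);   // 1000000000 — fixed, only redistributed
```

The point of the dual-token split is inflation control: gameplay rewards are paid in the uncapped utility token, whose supply can be balanced with sinks, while long-term value and governance sit in the fixed-supply token.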

Obstacles to large-scale application🔻

For MR applications to be promoted at scale, they will depend to some extent on the install base of the hardware. So I think MR will develop in Web2 first, and only then in Web3.

There are many teams that can build 3D imaging and perspectives, but without hardware support it is really no different from ordinary games or the shoddy metaverse applications we have already experienced.

The hardware does not have to be the Apple Vision Pro specifically, but Apple can build an ecosystem around this type of application, and that is the most important link.

As for the device itself, I have tried it and still find it uncomfortable to wear, and it is still a bit bulky. I don't know how Brother Sun has so much fun with it.

These are just some shallow views; corrections are welcome~