Hypemoon
@Hypemoon
Your portal into Web3 culture. Powered by HYPEBEAST.

Home Improvement Retailer Lowe’s Now Offers ‘MFers’ Character-Infused Garden Flag

Last June, when home improvement conglomerate Lowe's decided to enter Web3, its focus on sustaining pandemic-peak sales growth came at a time when demand for home improvement goods was beginning to stabilize after the global COVID-19 outbreak.

That foray into the metaverse was followed five months later when the retailer debuted its Measure Your Space tool, which lets Lowe's shoppers measure and organize their spaces through its mobile app.

Its NFT collection, which also debuted last June, was marketed toward builders who wanted to virtually visualize their workspaces. The collection of 500 free 3D digital assets allowed customers to download the assets and use them in Lowe’s metaverse hub, Open Builder.

Seemantini Godbole, chief information officer of Lowe's, has previously highlighted the retailer’s efforts over the years to infuse new technologies into the planning and shopping experience, noting that Lowe’s customers benefit from being able to explore and test home improvement projects in the virtual world before implementing them in their real-world homes or job sites.

On Friday, CoinDesk revealed that the home improvement giant released a physical garden flag earlier this week featuring characters from the popular NFT project ‘mfers’, which currently exists in the public domain under a CC0 license that allows anyone to use the mfers characters to create any type of commercial good.

gm mfers #mfersinthewild https://t.co/4lSSzCezUe pic.twitter.com/mfp8FHnsQZ

— MSV (@matthewvarnell) June 5, 2023

Stephen Thompson, a lawyer by trade, and his brother-in-law Matthew Varnell recently started Total Marketing Web3 (TM3), an umbrella company overseeing projects that seek to mesh NFT storytelling with real-world commercial products and applications.

Their “Evergreen Siezenals” garden flag, currently available on Lowe’s website for $39.98, is TM3’s first product. Measuring 1 foot wide by 1.5 feet high, it features the phrase “cc0 summer 2023” above an image of two mfers characters sipping tropical drinks on the beach.

Partnering with Evergreen, a lawn and garden manufacturer with an existing business relationship with Lowe’s, was the next step, according to Thompson, in helping retailers like Lowe’s build brand loyalty among younger consumers who are prioritizing their digital identities in the real world.

"The Web2 industry, as far as our experiences goes, is very curious and eager to tap into what is happening here in Web3," he told CoinDesk.

Launched in 2021 by a pseudonymous artist known as “Sartoshi,” mfers are hand-drawn characters inspired by the “Are ya winning, son?” meme. The collection has garnered a strong following and collector base, with an original mint price of approximately $320 USD.

Last June, Sartoshi transferred the project’s smart contract and ownership to the community, despite the project not presenting any utility or roadmap for its longevity. With over $125 million USD in sales, according to OpenSea, mfers currently trade at just over $1,000 USD.

Home Depot also hinted at its own foray into Web3 after filing approximately 24 trademark applications last November, including plans for home improvement services in virtual and augmented reality.

In other news, read about TikTok's now-former COO, V Pappas, submitting their resignation and looking towards a blockchain future.

Click here to view full gallery at Hypemoon

For TikTok’s Now Former COO, Emerging Tech and Blockchain Seem to Be the Path Forward

As the future of TikTok remains a mystery, reports on Thursday presented another major challenge for the company after the social media platform’s COO, Vanessa "V" Pappas, announced their resignation to employees in an internal memo that was later shared on Twitter.

Here's the note I sent to all TikTok employees this morning pic.twitter.com/4iB9Ph7b6q

— V Pappas (@v_ness) June 22, 2023

"Given all the successes reached at TikTok, I finally feel the time is right to move on and refocus on my entrepreneurial passions. Few had imagined what the last five years would look like and with all the incredible innovation happening now with generative AI, robotics, renewable energy, genomics, blockchain and the IoT, clearly the future will again look much different," Pappas wrote. 

Of the many things TikTok has experimented with over the years, its foray into Web3 hasn’t landed quite as well as many had hoped, beginning with the integration of blockchain-based music platform Audius in February.

The integration allowed users new to Audius to create accounts by linking their TikTok profile, further enabling listeners to automatically import their user handle, information, and “blue-badge” verification status to Audius directly. 

But at the heart of the integration was the ability for users to share Audius-native songs in their TikTok videos, further amplifying the reach of both platforms. 

Audius co-founder and chief product officer Forrest Browning told CoinDesk in February that because of TikTok’s evolution into a “social media juggernaut” over the past few years, enabling Audius’ integration would attract “an even wider group of talent” to the platform. 

In September 2021, the platform released an NFT collection of videos from TikTok’s most popular creators, but like many well-intentioned attempts we’ve seen in the space, the underlying purpose and “why” seemed to be lacking. 

Pappas, who has long served as the company’s public face for US-based operations, first assumed the role of COO in May 2021, spending a majority of that time attempting to publicly distance the US arm from its Chinese ownership.

Despite their departure, Pappas will remain with the company in an advisory role, but the move only adds to the ongoing uncertainty surrounding the company’s legitimacy as lawmakers at both the state and federal level actively work to ban the mobile app on national security grounds.

With Pappas’ departure, TikTok is moving its Chief of Staff Adam Presser up to head of operations, according to an email reportedly seen by Gizmodo, while former Disney executive Zenia Mucha joins as TikTok’s new Chief Brand and Communications Officer. 

In March, TikTok began blocking links to app stores in creators’ bios, according to a TechCrunch report, which reportedly also extended to third-party link-in-bio solutions like Linktree.

In other news, read about Roblox and its invitation to gamers to build mature experiences catered to 17+.

Click here to view full gallery at Hypemoon

Web3 Flagship Game Gods Unchained Launches on Epic Games Store

Gods Unchained, the highly successful trading card game (TCG) built on the Ethereum blockchain, has achieved a significant milestone by launching on the Epic Games Store, one of the largest digital distribution platforms for PC gaming.

"It is hard to overestimate the significance of Gods Unchained's launch on Epic Games Store, one of the largest PC gaming platforms in the world," said Daniel Paez, Executive Producer of Gods Unchained.

The availability of Gods Unchained on the Epic Games Store opens up a vast audience of over 230 million PC gamers worldwide. This newfound accessibility and exposure will undoubtedly boost the game's visibility and attract a diverse player base of both traditional PC gamers and TCG enthusiasts. It marks a natural progression for Gods Unchained, fulfilling the promise made to its community to expand its reach and appeal.

Mortals, Gods Unchained is cutting through to mainstream TCG and strategy gamers with a razor-sharp new look

First stop: Epic Games Store

https://t.co/b41gDDuuUh pic.twitter.com/5sxAiOsbaM

— Gods Unchained (@GodsUnchained) June 22, 2023

In addition, Gods Unchained is preparing a soft launch on both the Google Play Store and Apple App Store in the latter half of 2023. As a part of this transition, the game's pack-opening procedure has been revamped, shifting from a video-based experience to a WebGL-based one.

The developers of Gods Unchained are also focusing on improving the user experience by streamlining gameplay, implementing quality-of-life enhancements, and optimizing the pack opening mechanic for a smoother experience on all devices, including mobile.

Epic Games is a leading video game developer and publisher known for creating popular titles like Fortnite, as well as the Unreal Engine. The gaming giant has continued to push forward in the world of Web3 and blockchain gaming: just last week, LVMH announced a partnership with Epic Games that aims to bring customers new experiences including virtual fitting rooms, fashion shows, augmented reality (AR), product digital twins, and more.

The expansion of Gods Unchained from a Web3-native title to a mainstream TCG marks a significant step forward for the blockchain gaming industry. It appeals to blockchain enthusiasts while attracting traditional gamers who may be experiencing blockchain-powered gaming for the first time, further strengthening the credibility and adoption of blockchain technology within gaming.

In other gaming news, Roblox Invites Gamers to Build Mature Experiences Catered to Users 17+

Click here to view full gallery at Hypemoon

Roblox Invites Gamers to Build Mature Experiences Catered to Users 17+

Roblox, the popular online gaming platform, has recently made an exciting announcement that aims to broaden its horizons and cater to a more mature audience. In a move signaling the platform's commitment to inclusivity and diverse content, Roblox has invited its creative community to build and share mature experiences specifically designed for users aged 17 and older.

This new initiative marks a significant step forward for Roblox, as it acknowledges the platform's growing user base and their evolving preferences. By embracing a broader range of content, Roblox is actively fostering an environment that meets the needs and desires of its diverse community members, ensuring they can find engaging experiences that resonate with their interests.

The Gaming Landscape is Changing 

One key aspect of Roblox's decision to welcome mature experiences is the recognition that its user base is not limited to younger players alone. Over the years, the platform has witnessed an increase in the number of older users, including young adults and even parents who enjoy exploring virtual worlds and engaging in immersive gameplay.

"The fastest-growing age group on Roblox is 17-to-24-year-olds and in 2022, 38% of our daily active users were 17 and over. As a natural evolution, we’re now allowing creators to make content specifically for this audience," Roblox shared in its statement.

In 2022, multiple reports and headlines indicated that adult gamers now outnumber kids. According to a report commissioned by the Entertainment Software Association, an organization representing the U.S. video game industry, the current average age of video game players is 33 years old, highlighting the demographic trends among people playing video games today.

This move by Roblox opens up opportunities for creators within its community. Talented individuals can now unleash their creativity by developing experiences tailored to the interests and sensibilities of an older demographic. Expanding content allows creators to explore new themes, narratives, and gameplay mechanics that resonate with a more mature audience. It encourages the development of complex storylines, challenging gameplay, and sophisticated social interactions, providing users aged 17 and older with a broader range of experiences to enjoy on the platform.

Inviting mature experiences also aligns with Roblox's ongoing commitment to maintaining a safe and responsible environment. With stringent moderation systems in place, Roblox ensures that the mature experiences adhere to appropriate guidelines and meet the standards of inclusivity and user safety. This responsible approach to content creation enables users to enjoy immersive and engaging experiences without compromising their well-being.

In other metaverse gaming news, Fortnite and Nike’s Air Max IP Merge With New ‘Airphoria’ Experience

Click here to view full gallery at Hypemoon

Slim Jim Debuts “MEATAVERSE” – Its First-Ever Digital Membership Club

First created in the 1940s to nourish Pennsylvania bar patrons, Slim Jim beef jerky arose as an alternative to snacking on pepperoni in public, eventually becoming synonymous with road trip culture and a nationwide favorite at gas stations across the country. 

More than 70 years later, Slim Jim is still here and still relevant. 

On Tuesday, the snack brand made its foray into Web3 by announcing the launch of its “Meataverse” – its first-ever digital membership club and digital collectible experience, free to all who want to participate.

It's 11:69 with a forecast of MEAT so we bring you the MEATAVERSE https://t.co/K8YEJlVB8a pic.twitter.com/MgX7G6xP0l

— Slim Jim ? MEATA (@SlimJim) June 20, 2023

“The cornerstone of our Metaverse is a rich offering of 10,000 unique, dynamic digital collectibles (NFTs) that can be evolved by scanning products and engaging with the brand on social media,” the website reads in part. 

The NFTs, known as “GigaJims,” were made available on a first-come, first-served basis, free to mint on Polygon.

It All Starts With M.E.A.T.

Inspired by Slim Jim’s brand mythology and lore built up over five years on social media, the experience centers on its “Meaterializer” feature, which “is suddenly back online after laying dormant for over 30 years.”

Only by combining the Slim Jim Meat Stick with the base S.A.U.C.E. left behind by Dr. Slender James can fans access the Metaverse for the first time and summon their very own GigaJim.

In 1969, brilliant theoretical mathematician Dr. “Slender” James is recruited by Slim Jim Industries to head its experimental Mechanics Engineering and Technology (“M.E.A.T.”) initiative.

Five years later, an exploration of Meta Meat Mechanics was published, containing Dr. James’s proposal of “Quantum Stick Theory” and the existence of “alternate meat dimensions” which he characterizes as a “metaverse.”

S.A.U.C.E.

During his research, he discovers “S.A.U.C.E.” – “savagely augmented user controlled energy” – powerfully excited subatomic particles that catalyze extraordinary transformative effects in meat and serve as Slim Jim’s native digital currency.

Wanting to harness the power of S.A.U.C.E. and prove his theories correct, Dr. James devotes the next decade to engineering the Meaterializer – an interdimensional device that makes passage between digital, physical, and metaphysical worlds possible. 

However, while conducting a preliminary safety check of the device, Dr. James mysteriously vanishes. Feared lost forever, James’ experiments are shut down and his lab decommissioned. 

The Long Boi Gang

A group of passionate Slim Jim fans calling themselves “The Long Boi Gang” uncover archival information of Dr. James’ research online, joining together to discover the whereabouts of Dr. James. 

By synthesizing complex data scraped from the far reaches of the internet and Slim Jim’s comments on social media, The Long Boi Gang is able to get a lock on Dr. James’ location and slide into his virtual DMs – reactivating the Meaterializer.

To evolve their GigaJims, The Long Boi Gang just needs to scan Slim Jim products and interact with the brand on Discord, collecting S.A.U.C.E. IRL to dial into new experiences and benefits from the snack brand.

While the brand’s roadmap on its website doesn’t explain much, it does state that additional enhancements and features will be released over the coming seasons.

In other news, read about Fortnite and Nike’s collaboration that merges Air Max IP.

Click here to view full gallery at Hypemoon

OpenAI Now Has Its First Defamation Lawsuit After Spitting Out a Case That Made Up New Facts

“While we have safeguards in place, the system may occasionally generate incorrect or misleading information and produce offensive or biased content. It is not intended to give advice.”

– OpenAI’s opening disclaimer

And that brings us to the heart of our biggest fear – what happens when technology turns against us? 

What happens when technology is prematurely deployed without the proper testing and knowledge behind its capabilities?

Earlier this month, OpenAI, the world’s most talked-about artificial intelligence (AI) company, was served with its first-ever defamation lawsuit, further showcasing the dangers of ChatGPT’s unchecked ability to generate results with no factual or legal backing.

Mark Walters, a nationally syndicated radio host in Georgia, filed his lawsuit against OpenAI on June 5 alleging that its AI-powered chatbot, ChatGPT, fabricated legal claims against him. 

The 13-page Complaint references AmmoLand.com journalist Fred Riehl and his May 4 request to ChatGPT to summarize Second Amendment Foundation v. Ferguson, a federal case filed in Washington that accused the state’s Attorney General Bob Ferguson of abusing his power by chilling the activities of the gun rights foundation. Riehl provided the OpenAI chatbot with a link to the lawsuit.

While Walters was not named in that original lawsuit, ChatGPT responded to Riehl’s request for a summary of Second Amendment Foundation by stating that it was:

“...a legal complaint filed by Alan Gottlieb, the founder and executive vice president of the Second Amendment Foundation (SAF), against Mark Walters, who is accused of defrauding and embezzling funds from the SAF.”

But here’s where things get distorted and dangerous – none of ChatGPT’s statements concerning Walters are in the actual SAF complaint. 

This AI-generated “complaint” also alleged that Walters, who served as the organization’s treasurer and chief financial officer, “misappropriated funds for personal expenses without authorization or reimbursement, manipulated financial records and bank statements to conceal his activities, and failed to provide accurate and timely financial reports and disclosures to the SAF’s leadership.”

As a form of relief, the plaintiff allegedly was seeking “the recovery of the misappropriated funds, damages for breach of fiduciary duty and fraud, and Walter’s removal from his position as a member of the SAF’s board of directors.”

However, herein lies the problem – according to Walters, “[e]very statement of fact in the [ChatGPT] summary pertaining to [him] is false,” with OpenAI’s chatbot going so far as to create “an erroneous case number.”

“ChatGPT’s allegations concerning Walters were false and malicious, expressed in print, writing, pictures, or signs, tending to injure Walter’s reputation and exposing him to public hatred, contempt, or ridicule,” the lawsuit states. “By sending the allegations to Riehl, [OpenAI] published libelous matter regarding Walters.”

If you were to ask ChatGPT to provide a summary of the SAF lawsuit cited in Walters’ complaint, you may get a response similar to this:

“I apologize, but as an AI language model, my responses are based on pre-existing knowledge up until September 2021. Therefore, I cannot access or browse the internet or view specific documents or links that were published after my knowledge cutoff. Consequently, I’m unable to provide you with a summary of the accusations in the lawsuit you mentioned…[t]o get information about the lawsuit and its accusations, I recommend reviewing the document yourself or referring to trusted news sources or legal websites that may have covered the case. They can provide you with accurate and up-to-date information regarding the specific lawsuit you mentioned.”

While OpenAI has yet to comment on Walters’ ongoing defamation lawsuit, the case does raise the question of why the AI company isn’t pressing harder on these arguably foreseeable consequences of code that was, in retrospect, deployed without proper testing.

The case is Mark Walters v. OpenAI, LLC, cv-23-A-04860-2.

You can read Walters’ June 5 complaint here. 

In other news, read about US President Joe Biden meeting with 8 tech leaders in addressing AI bias and workforce benefits.


Fortnite and Nike’s Air Max IP Merge With New ‘Airphoria’ Experience

Building upon its multi-year partnership with Epic Games, Nike on Tuesday announced the launch of Airphoria, its newest immersive gaming experience, which operates within Fortnite. 

This collaboration merges Nike’s iconic Air Max brand and IP with Fortnite’s immersive world and story building, powered by Epic Games’ Unreal Editor for Fortnite. 

“Enter the world of Airphoria, a beautiful and extraordinary fusion of cutting-edge design and unparalleled creativity,” the press release reads, adding that the new experience will enable players to engage with Air Max sneakers in a new way as sneakerheads embark on “the ultimate sneaker hunt.”

“Airphoria represents a new, immersive experience for Nike as it amplifies its efforts in gaming and virtual products,” said Ron Faris, VP/GM of Nike Virtual Studios. Faris says that this partnership is another opportunity for the brand to “seek authentic ways” to deepen its connection with fans and bring consumers into Nike’s digital ecosystem. 

“What’s more, Nike is one of the first brands to use Epic Games’ Unreal Editor for Fortnite to build Airphoria, paving the way for a continued partnership that will further unlock the future of gaming,” he added. 

The lifeblood of Airphoria’s ecosystem is five iconic Air Max Grails – Air Max 1 OG, Air Max 97, Air Max TW, Air Max Scorpion, and Air Max Pulse – all suspended in the air above the city. 

These Air Max Grails, according to Tuesday’s announcement, represent pivotal moments in Air Max history, allowing the city to exist in the ‘Air State’ – or what Nike describes as “the purest form of imagination and creativity.”

As for the experience itself, fans start their journey when “Maxxed out Max” dispatches his Sneaker Drones to seize the Air Max Grails from Airphoria, but Airie, the defender of the Air Max Grails, scatters the sneakers throughout Airphoria’s city down below, causing Airphoria to lose its power. It’s up to the players to successfully return the Air Max Grails back to their rightful place above the city. 

As part of today’s launch, Fortnite players can purchase the Airie and Maxxed Out Max Outfits in Fortnite’s Item Shop, with a limited Airphoria-inspired collection also dropping on Nike.com. 

From June 20 to June 27, players can access Airphoria island through Fortnite Discover or the island code 2118-5342-7190.

“Fortnite continues to be a primary destination for new cultural moments in entertainment, fashion, and sport, and we’re proud to launch Airphoria alongside Nike,” says Nate Nanzer, VP of Global Partnerships at Epic Games. “We know from past activations with Nike that players love the partnership, and Airphoria’s immersion, beauty and storytelling about the Air Max brand take things to a new level. Airphoria demonstrates what’s possible when next-gen tools like UEFN are used by an iconic brand like Nike and creators in the Fortnite community to build immersive worlds together.”

As for what’s happening in Nike’s direct ecosystem, its Web3 platform, .SWOOSH, is still in closed beta, but the public is encouraged to register to become a .SWOOSH member at welcome.swoosh.nike.

In other news, read about LVMH and its partnership with Epic Games and Apple.


US President Joe Biden to Meet With 8 AI Experts on Best Practices, Addressing Bias and Workforce...

US President Joe Biden is scheduled to meet with eight artificial intelligence (AI) experts and business leaders in San Francisco on Tuesday as the administration pushes for a better understanding of the technology and the proper safety and privacy protections it requires. 

The meeting will center on the current challenges AI poses to the workforce and children, the harms of AI bias, and the potential benefits the technology carries for both education and medicine. 

Those participating in the conversations include:

Sal Khan, CEO of Khan Academy Inc;

Jim Steyer, CEO of Common Sense Media;

Tristan Harris, Executive Director of the Center for Humane Technology;

Oren Etzioni, former CEO of the Allen Institute for Artificial Intelligence;

Fei-Fei Li, co-director of Stanford University’s Human-Centered AI Institute;

Joy Buolamwini, founder of the Algorithmic Justice League;

Jennifer Doudna, professor of chemistry at the University of California, Berkeley; and

Rob Reich, political science professor at Stanford University.

Last month, Biden and Vice President Harris met with the heads of Google, Microsoft, OpenAI, and Anthropic at the White House to discuss best practices, while simultaneously announcing an investment by the Biden administration of $140 million USD to establish seven new AI research institutes. 

According to a White House official, the White House Chief of Staff Jeff Zients is currently overseeing efforts to develop additional steps the Biden administration can take on AI in the coming weeks. 

Earlier this month, Zients said that AI companies are working with the administration to unveil privacy and security commitments in the near future, but provided very little context. 

The broad regulatory push has been accelerated by other jurisdictions, including the European Union (EU), which is already in the process of passing what is considered to be the world’s first comprehensive regulatory framework on AI. 

Last week, the EU took its first major step with the European Parliament passing a draft law known as the “A.I. Act,” which was first proposed in April 2021. While the initial draft came prior to the surge of generative AI, including chatbots, the new draft takes these into account, along with the implications they bring. 

Unfortunately, one of Biden’s top AI advisers, Alexander Macgillivray, who helped write the president’s proposal for an AI Bill of Rights, left the administration on June 8. 

Today is my last day in the glorious EEOB. It was a huge privilege to get to work here again as part of the Biden Administration. I am extremely grateful and more than a little sad that my time is up.1/2 pic.twitter.com/jg1JqYgKxW

— Alexander Macgillivray (@amac46) June 8, 2023

Ahead of last month’s meeting, companies including Microsoft and Google committed to participating in the first independent public evaluation of their systems, according to Bloomberg. 

The Commerce Department also said earlier this year that it was considering rules that could require AI models to go through a certification process before release.

In other news, read about PassGPT, an AI that is trained on minimizing password leaking.


Chester Charles: the Lost Grand Master, an Alt-History Exploration of Queer Art By ClownVamp

On Wednesday, June 21, artist ClownVamp (CV) will share his first solo exhibition with the world, through a physical showing taking place at The Oculus at the World Trade Center in New York City -- curated by SuperRare and powered by TransientLabs.

CV shared that Chester Charles: The Lost Grand Master is an immersive, artificial intelligence (AI)-driven, alternate-history storytelling experience that explores self-censorship in historical queer art through the lens of his protagonist, Chester Charles.

To learn more about the artist's inspirations and process, as well as the history and future of queer art, Hypemoon spoke with CV, who expressed that most of our reality is just a curated version of the truth but that AI could help expand or challenge perception.

CHESTER CHARLES: The Lost Grand Master.

Announcing my first-ever solo show.

An immersive AI-driven story.

June 21st at The Oculus - World Trade Center

Produced by @SuperRare and Powered by @TransientLabs.

10 months of work, culminating.

A thread... pic.twitter.com/u3oUEcbYRT

— ClownVamp (@ClownVamp) June 7, 2023

Conversation with ClownVamp

"We are living in a curated version of the truth every time we walk through a museum," - ClownVamp

Sharing the goal of the exhibition, CV said he wanted a show that would challenge people's perceptions of history and the accuracy of what they've been taught. He explained that "Instead, we are living in a curated version of the truth every time we walk through a museum."

To achieve this effect, CV shared that he "wanted to reflect back a fiction that rhymed with the truth. As a result, there are a lot of details intended to be strong facsimiles of what an actual retrospective would look like. The art is incredibly high resolution with brush and canvas details, each piece comes with the text of what would be on a museum label, and there was an inscription 'found' on the back of each piece."

The works in question are created by CV's alt-history protagonist, Chester Charles, who is described as "a lost impressionist painter, a man who wandered life with insatiable curiosity, a man whose work was self-censored and hidden from history -- a gay man."

Inspirations, Motivations, Process

"I often work on random AI experiments. Sometimes because I want to explore a feeling. Sometimes to try a new tool. Sometimes just because," CV shared.

Explaining how he landed on the concept of an alt-history retrospective, CV said that "Last year, I was creating some AI explorations around the concepts of fatherhood, trying to prompt a father-son walking in the woods, in a historical art style. The AI models at the time were very prone to twinning where they would replicate the subjects in your prompt," adding that "As a result, the AI model created a scene of two dads and a son. Seeing this result on the screen created this instant stir in me."

The generated scene ushered in a sort of revelation for CV who said "I wasn’t used to seeing gay scenes, let alone scenes of gay parenting, in historical art. As a gay man, I didn’t realize just how much I missed that when walking through museum galleries until that moment."

CV shared that he was lucky to have a long time to work on the project, allowing him to create a "mental glitch effect" that would have the audience questioning the existence of the work in legitimate history.

"Over time, I iterated on the idea and got lots of feedback from smart friends including Chris from Transient and Mika from SuperRare, eventually what became clear was that the best way to tell the story was to tell it from the perspective of a fictional artist, Chester Charles," said CV.

He further explained that "Much of the art canon is told through an academic lens of analysis, evaluating careers and the underlying aesthetic and biographical changes. The idea here is to use that familiar paradigm as a jumping-off point for a story."

Alt-History & Self-Censorship

For ClownVamp, AI represents the "ultimate remix machine." He shared that "It allows you to mix and mash styles in a way that is perfect for this sort of experiment. While the training data for these AI models doesn’t have historical queer art, it does have queer art and it does have historical art. As a result, these models can allow you to reconfigure the past by bringing these concepts together."

While much of the performative art show is created as an alt-history, some aspects are historically accurate or at the very least nod to early queer art.

Examples of this are found in Jim Van Buskirk's article "Queer Impressions of Gustave Caillebotte," on the 19th-century French Impressionist painter whose works often focused on the male form, depicting it in submissive poses and, in the case of "Man at His Bath," completely nude from the perspective of the male gaze.

Other works, like Boating Party [Oarsman in a Top Hat], show what might be considered "coded" queer art -- where the overall composition is nothing out of the ordinary, but the focal point draws viewers' eyes to the crotch of the oarsman.

Van Buskirk shared that "My aim here is not one of reductivism. It is not for me to determine whether or not Gustave Caillebotte might have been homosexual -- whatever that means -- but I do wonder why art historians are failing to ask questions in an effort to illuminate aspects which obviously distinguish his work from that of his contemporaries."

He added, "I would hope that future examinations of Caillebotte’s oeuvre include an exploration of the 'queer' gaze so abundantly evident in his work."

Back to the conversation with CV, he explained that "When we think about art that doesn’t exist, or at least that we don’t have a record of. It goes deeper than just what wasn’t celebrated, or what wasn’t collected. It also goes to what wasn’t even made. Self-censorship is the way that queer people have often had to navigate their environments."

"Say the wrong thing and suffer consequences ranging from shame to literal death. The result is that there is a massive amount of human potential that was never even expressed. What could have been? What would have been? What should have been?" the artist put forward, explaining the significance of AI as a tool to further explore these questions.

Question Reality

"My goal is for the viewer to realize that what we think of as deeply factual or even academic, is really a flawed reality," shared CV, adding that "It represents just what was allowed to be said, let alone recorded for posterity."

He further explained that "By showing an altered version of reality, my aim is for people to confront this. The show is meant to both transport you to a different time, but also feel unsettling at the same time. I want you to start to question the realities you face every day, and to think about how are some of these constraints still around today."

As CV looked to make the alt-history work as believable as possible, he mentioned that paint textures were a crucial aspect and explained that "One of the biggest breakthroughs was figuring out how to nail the painting textures I was looking for, [which he did] thanks to some great tips from friend and fellow artist Henry Daubrez and way too much time experimenting."

"The goal of the show is to make you question reality and so making it feel as close to real as possible was essential. I want you to smell the canvas when you look at the painting, especially when they’re blown up on the large screens of the physical show," he said.

Speaking on his partners, TransientLabs and SuperRare, CV said that "Chris Ostoich [COO] has been a lover and collector of AI work since before he joined Transient and has been a phenomenal brainstorming partner for how we can best leverage technology, narrative, and aesthetics to all work together."

As for the curation team from SuperRare, CV said that "Mika, Linda, and the SuperRare team have been phenomenal. I had thousands of pieces to pick from and they helped me craft the ideal aesthetic narrative that would also work on a storytelling level."

Those interested in viewing the solo exhibition, powered by TransientLabs and curated by SuperRare, will be able to attend both on-chain and in-person viewings starting June 21. A total of 23 pieces will be shown, with three to be auctioned on SuperRare starting at a 1 ETH reserve price -- additionally, the show will include an ETH- and Tezos-based open edition.

Three pieces will be available via SuperRare auctions.

The reserve will be set to 1 ETH on Wednesday at 10AM.

Three pieces available:
- Doves by the Sea II, 1882
- The Crowded Stage, 1905
- Self Portrait, 1938

(Remaining pieces will be minted upon request after the show) pic.twitter.com/oUT5Kdj3oJ

— ClownVamp (@ClownVamp) June 19, 2023

"Between 'Chester Charles' and his guest curation, ClownVamp is marrying the queer medium of AI with the legacies of queer artists, past, present, or fictional, unsung or in hiding, erased or hypervisible," said SuperRare.

Elsewhere in art, see how ‘Human Unreadable’ showcases art longevity through an emotionally-driven experience.

Click here to view full gallery at Hypemoon

Dmitri Cherniak's 'The Goose' Sells for $6.2M At Sotheby's 3AC Auction

Scroll for any amount of time on NFT Twitter and you'll see renditions and remixes of Dmitri Cherniak's Ringers #879 aka "The Goose," as artists across the space celebrate Cherniak's significant Sotheby's sale, in what has been dubbed Goose Day.

The Goose was auctioned as part of a continuance of the "Grails" collection sale, consisting of works once owned by the now defunct Three Arrows Capital (3AC) group.

Estimated to sell for between two and three million dollars, The Goose surprised bidders and the Web3 space alike, realizing more than double that estimate at $6.2 million USD after fees, going to Punk6529 -- who was "prepared to go higher."

Take a closer look at @dmitricherniak's Ringers #879, famously nicknamed 'The Goose.' Experts @katehannah, @sofiagarcia_io, and @michaelbouhanna discuss its captivating history & cultural significance. Discover more: https://t.co/pFiMNtYSpc pic.twitter.com/k28zaC6VbP

— Sotheby's Metaverse (@Sothebysverse) June 9, 2023

... and the final two minutes as the hammer came down, setting a new record for the artist and the 2nd highest price for a work of generative art. Congrats @punk6529! pic.twitter.com/QYzPcFIBmE

— Sotheby's Metaverse (@Sothebysverse) June 15, 2023

What makes The Goose so significant though and why has it now sold for millions of dollars not once but twice? The best answer to these questions comes from Cherniak himself, who in a recent tweet said "The Goose is and was significant because it helped open up this kind of art to a new, technically savvy group of people whose idea of creativity or culture is not the same as yours."

"Computer and code-based art is an art form that has been around for almost a century with very little fanfare," he explained, adding that "It is an extremely fascinating art form and has a rich history. It has been despised at many points throughout its history and its innovators were harassed. I am not an innovator in this sense, I have been able to develop my practice using the tools and documentation, techniques, as well as open source libraries mostly made by those who are my senior, and have contributed back where I can."

Cherniak continued, "Automation is my artistic medium and after spending years as an engineer solving fascinating and complex problems in creative ways, I wanted to do the same for visual art to make a point - maybe to myself and also to others as well. NFTs have been mostly discussed as an economic vehicle and a form of social mobility for artists but for artists using code, where algorithms, engineering practices, and randomness are so intertwined not just with output but our iterative practices, this kind of distributed computing system is a native form to our work."

While some outlets publish headlines like "3AC Bankruptcy Auction Nets $11M in What Might be Final Hurrah for NFTs," Cherniak said that the art form is "not going away and only more and more people will engage with coding as our population is forced to become more technical."

In other news, on-chain generative choreography in "Human Unreadable" showcases art longevity through an emotionally-driven collection experience.


LVMH Announces Partnerships With Epic Games and Apple

During its most recent Innovation Award ceremony and show, Viva Tech, LVMH, the luxury conglomerate behind Louis Vuitton, revealed two new partnerships -- one with Epic Games to transform its creative pipeline and another with Apple to integrate the "Tap to Pay" system in U.S. stores.

The announcements come on the heels of Louis Vuitton's recently revealed VIA Treasure Trunk, which represents the first time the Maison has offered any of its luxury trunks in a digital or NFT form. The show also included metaverse experiences and awards for those building tools that utilize artificial intelligence (AI).

At @VivaTech, LVMH and Epic Games announce partnership to transform the Group’s creative pipeline and bring customers new types of immersive products and discovery experiences.

Learn More: https://t.co/f7phMjbhbI#LVMH #MetaHuman #VivaTech @UnrealEngine pic.twitter.com/UIUe4efVZL

— LVMH (@LVMH) June 14, 2023

Through its partnership with Epic Games, known for its Fortnite title and Unreal Engine platform, LVMH hopes to bring customers exciting new experiences including virtual fitting rooms, fashion shows, 360 product carousels, augmented reality (AR), product digital twins, and more.

LVMH shared that, to achieve these goals, it would utilize Epic's suite of cutting-edge 3D creation tools, including Unreal Engine, Reality Capture, Twinmotion, and MetaHuman, to accelerate its growth in the digital space.

"We have always been committed to innovations with the potential to bring our customers new experiences. Interactive games, which have developed into a full-fledged cultural phenomenon, are a perfect example. The partnership with Epic Games will accelerate our expertise in 3D tools and ecosystems, from the creation of new collections to ad campaigns and to our Maisons’ websites," shared Toni Belloni, LVMH Group Managing Director.

Bill Clifford, VP of Unreal Engine at Epic Games, also shared his excitement about the partnership, emphasizing the transformative potential of Epic's suite of advanced creator tools. Clifford stated, "With this partnership, we will work with LVMH's designers to transform physical and digital product creation using Epic's suite of advanced creator tools. We are excited to accelerate the Group's adoption of Unreal Engine, Reality Capture, Twinmotion, and MetaHuman technology and help LVMH's global brands engage with customers through immersive digital experiences."

Other highlights of the Viva Tech show included announced plans to integrate Apple's "Tap to Pay" into physical retail stores, starting with those in the U.S. -- which the group said will create "an exciting new in-store experience."

Additionally, LVMH held a variety of award ceremonies, including those for the AI sector, with Gonzague de Pirey, LVMH Group Omnichannel & Data Officer, stating that "Data and AI figure at the heart of all the solutions from the startups we recognized today."

The group also revealed new metaverse experiences through a virtual world called "The Journey," which prompts visitors to select from a variety of portals. Once a portal is selected, visitors are introduced to a variety of interactive elements, some containing AI artwork, information on design, videos that introduce creative teams, and more.

Whether through its blockchain consortium platform Aura or the launch of the VIA Treasure Trunk NFT, LVMH has shown that it is committed to an elevated and continued pursuit of the digital space.

In other news, could the blockchain change the face of watch trading?


Love Visiting State & National Parks? California State Parks Harnesses 'AR' Tech With New Interac...

“I encourage everybody to hop on Google and type in ‘national park’ in whatever state they live in and see the beauty that lies in their own backyard. It’s that simple.”

– Jordan Fisher, singer/songwriter

Earlier this month, the California Department of Parks and Recreation shared an exciting announcement that makes for a fun use of augmented reality (AR) technology for those who appreciate the outdoors and the beauty state and national parks have to offer, beginning with California's own parks.

California State Parks launched ‘Virtual Adventurer,’ a new AR mobile app that will transform how park visitors connect to and interact with California’s most iconic locations, as well as their deep history and diverse cultural and natural landscapes.

The app is available across nine participating state parks:

Anza-Borrego Desert State Park

Bodie State Historic Park

Colonel Allensworth State Historic Park

Jack London State Historic Park

Montana de Oro State Park

Oceano Dunes State Vehicular Recreation Area (Oso Flaco Lake)

Old Town San Diego State Historic Park

Point Lobos State Natural Reserve

Sue-meg State Park

Through its underlying AR technology, Virtual Adventurer offers the public experiences that span from storytelling and holograms to 3D images and digital reconstructions, all highlighting California’s various cultural, historic, and natural resources.

The development of Virtual Adventurer was led by TimeLooper Inc., an immersive digital experience and exhibition firm. 

“[California] State Parks came to us with a vision to expand the scope of stories told in its parks in a manner that is highly immersive and relevant to today’s park visitors,” said TimeLooper Principal and Founder Andrew Feinberg. 

Unique to the experience is the app’s dynamic and evolving storytelling. It has also been designed to be one of the most accessible mobile apps on the market, offering users Americans with Disabilities Act-compliant accessible PDFs, audio descriptions, audio captioning, high-contrast colors, a dyslexic-friendly font, and more – all with the intention of ensuring the highest level of accessibility for anyone who wants to immerse themselves in this application of augmented reality.

For example, the public can download the app and travel through Coyote Canyon in today’s Anza-Borrego Desert State Park with Maria Jacinta Bastida, an Afro-Latina woman traveling with the Juan Bautista De Anza expedition, or see Chinatown reemerge from the sagebrush at Bodie State Historic Park.

Virtual Adventurer, according to California State Parks, will also be updated regularly with new adventures and stories that help enrich the overall experience of spending time in these state parks.

“We’re excited to launch the Virtual Adventurer app that further provides opportunities for Californians to access the cultural, historic and natural resources found across our beautiful state,” said California State Parks Director Armando Quintero. “The app also supports and enhances the department’s Reexamining Our Past Initiative by developing content for parks that tells a more complete, accurate and inclusive history of people and places.”

Visitors are encouraged to scan the QR code below to get started exploring California’s state parks:

“Helping park visitors to create deeper and more meaningful experiences in state parks is vitally important to connecting us all to the rich history of these places,” said Parks California Community Engagement Director Myrian Solis Coronel. “Through this app and emerging digital technology, we hope these tools will help all visitors see themselves as part of these special places and feel a sense of belonging.”

Parks California, along with other park partners like Jack London Park Partners, Point Lobos Foundation, Tribal Nations, and university partners, is also supporting content development.

WATCH: Hypemoon’s Bon Jenn walks us through the new Apple Vision Pro VR headset.


Hip-Hop’s Most Iconic Artifact Is Brought to Life in Photographer Barron Claiborne’s “The Crown”

“We can’t change the world until we change ourselves.” 

- The Notorious B.I.G.

This week, photographer Barron Claiborne introduced his newest digital collectible, “The Crown,” depicting the original crown worn by rapper The Notorious B.I.G. in 1997, as part of his King of New York - KONY NFT Collection.

The KONY NFT Collection is a limited collection of six items, including physical and digital NFTs, auction items, and exclusive works, all based on Claiborne’s 1997 KONY shoot depicting Biggie, whose real name was Christopher George Latore Wallace (1972-1997). 

Claiborne, a New York-based self-taught photographer and cinematographer, has emerged as a visionary artist whose contributions to American culture are unparalleled. Having made history in 1997 with his groundbreaking image, “The King of New York” (KONY), Claiborne portrayed Biggie with nobility, authority, gravity, and wisdom beyond his years. 

His work has been featured in The New York Times Magazine, The New Yorker, Rolling Stone, Esquire, Paper, and Interview. 

The OG crown worn by the rapper in 1997 sold for $600,000 USD at Sotheby’s in 2020, further solidifying its status as one of hip-hop’s most iconic artifacts. 

According to the Chapter 2 agency, the digital collectible will be offered as an Open Edition. Once they mint, owners gain access to an augmented reality (AR) filter that lets them engage with The Crown as it originally appeared in Claiborne’s iconic photograph, as well as a 3D viewer packed with rotation functions, edition information, and the authentic signatures of both The Notorious B.I.G. and Claiborne – just like the original photo.

The venture is part of the upcoming KONY NFT Collection, a limited collection of six items – physical and digital NFTs, auction items, and exclusive works – all based on the 1997 KONY shoot. The collection is a collaboration with Keiyo Art, with 5% of proceeds going to charity: water. 

“The digital crown is for everyone, because it was the power of the people who made the image what it is – be the beauty you wish to see,” says Claiborne.

The collection is currently available for purchase beginning Thursday, June 15 through its dedicated website – nft.barronclaiborne.art. 

In other news, read about photographer Justin Aversano and his new ‘Smoke and Mirrors’ tarot and NFT collection.


Could the Blockchain Change the Face of Watch Trading?

On Wednesday, June 14, two Rolex timepieces, a "Pepsi" GMT Master II and a Milgauss Blue Dial, were collateralized to secure a loan of $14,500 USDC -- all on-chain.

The loan came together through a collaborative effort between DeFi lending protocol Arcade.xyz and 4K Protocol, a platform that facilitates physical NFT minting and logistics for marketplaces, brands, dapps, and more.

These @ROLEX watches, stored at @4KProtocol, are being used as collateral for DeFi loans on Arcade.

Using Real-World Assets (RWAs) like luxury goods on-chain could open up a huge market for DeFi. pic.twitter.com/17JB2R7z6I

— Arcade.xyz (@Arcade_xyz) June 14, 2023

In a recent tweet on the loan, Arcade advisor and Web3 personality Cirrus shared that the two Rolexes were collateralized at a 12 percent APR, with total interest over the term of just 1.84 percent. He further explained the process, stating that "The Rolexes were sent to an escrow company [4K Protocol] who then sent back NFTs representing ownership of the watches."

The idea, he shared, is that the borrower can use those NFTs to tap into global liquidity instead of taking what might be considered a predatory loan from a local pawn shop. In the event the borrower defaults, the lender can use the NFTs to redeem the physical watches from escrow.
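The escrow-and-collateral flow described above can be sketched in a few lines. This is a hypothetical model, not Arcade.xyz's or 4K Protocol's actual API: the class names are illustrative, and the ~56-day term is merely inferred from the quoted 12 percent APR and 1.84 percent total interest.

```python
from dataclasses import dataclass

# Hypothetical sketch of the escrow-collateral loan flow; names and the
# 56-day term are illustrative assumptions, not Arcade's real mechanics.

@dataclass
class EscrowedWatch:
    token_id: int      # NFT representing the physical watch held in escrow
    description: str

@dataclass
class Loan:
    collateral: list   # NFTs locked for the duration of the loan
    principal_usdc: float
    apr: float         # annualized rate
    term_days: int

    def interest_due(self) -> float:
        # Simple (non-compounding) interest over the loan term
        return self.principal_usdc * self.apr * self.term_days / 365

    def seize_collateral(self) -> list:
        # On default, the lender redeems these NFTs for the physical watches
        return self.collateral

watches = [EscrowedWatch(1, '"Pepsi" GMT Master II'),
           EscrowedWatch(2, "Milgauss Blue Dial")]
loan = Loan(collateral=watches, principal_usdc=14_500, apr=0.12, term_days=56)

# At 12% APR over ~56 days, total interest is roughly 1.84% of principal
print(round(loan.interest_due(), 2))  # prints 266.96
```

Under these assumptions, the dollar interest works out to about $267 USDC on the $14,500 loan, matching the roughly 1.84 percent figure Cirrus cited.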

Cirrus further expressed that this type of scenario is "One of the most obvious and easy to understand use cases of NFTs," and that "Global liquidity will eventually always get you better rates than local liquidity" -- a statement that is backed by the continued growth of Arcade.xyz, which just reached a milestone of $100 million USD in total loan volume on its platform.

Lending mechanics aside, the tokenization of real-world assets (RWA) appears to be picking up steam across a variety of sectors -- whether through tokenized economies as expressed at a WEF panel in January or through the tokenization of physical collectibles like those from Courtyard.

In the case of traditional assets, the panel expressed that the world could soon see the tokenization of RWAs like carbon credits, housing, electricity, government bonds, foreign exchange, and more. During the panel, Bitkub CEO Jirayut “Topp” Srupsrisopa expressed that "tokenization will be the foundation of the digital economy going forward."

As for the $400 billion USD collectibles market, Courtyard believes that there is a prime opportunity to bring it on-chain, and has already begun to tokenize and custody collectibles like sneakers, rare trading cards, watches, and more.

Whatever the asset is, if it can be bought, sold, traded, or collateralized -- chances are that in the near future, it will live on-chain, providing a larger and more accessible global market across sectors.

In other news, adidas Originals and acclaimed artist FEWOCiOUS unveil exciting collaboration.


E.U. Parliament Just Approved Landmark ‘A.I. Act’, and It's Heavier Than GDPR

The European Union (EU) took a major step on Wednesday in giving the world its first set of governance rules surrounding artificial intelligence (AI), as the European Parliament passed a draft law known as the A.I. Act. 

The A.I. Act, which was first proposed by the EU on April 21, 2021, has been touted as the world’s first and most comprehensive regulatory framework. Since its proposal, the European Commission, the Council of the European Union, and European Parliament have been working on modifying and refining its initial draft, with a final version not expected until later this year.

While the initial draft came prior to the surge of generative AI, including chatbots, this new draft law certainly takes into consideration generative AI systems like OpenAI’s ChatGPT and the implications they bring. 

Testing Before Deployment

The Act, which applies to all EU member nations, requires EU member states to establish at least one regulatory “sandbox” to test AI systems before they are deployed.

“The one thing that we wanted to achieve with this text is balance,” Dragoș Tudorache, a member of the European Parliament, told journalists. The Act protects citizens while also “promoting innovation, not hindering creativity, and deployment and development of AI in Europe,” he added.

Greatest Potential for Human Harm

At the heart of Wednesday’s passed bill, generative AI would be subject to new transparency requirements, which include publishing summaries of copyrighted material used for training the system as well as implementing safeguards to prevent the AI from generating illegal content. 

The bill’s “risk-based” approach to regulating AI focuses on use cases with the greatest potential for human harm, including the utilization of AI systems to operate critical infrastructures like water or energy, our legal justice system, and determining access to public services and government benefits. 

Developers of this technology would be required to conduct risk assessments before putting that technology into the mainstream, similar to how the drug approval process currently operates. 

Banning of Facial Recognition

Another heated area of debate revolves around the use of facial recognition technology. The European Parliament voted to ban the use of live facial recognition, but is still open to whether there should be an exception for its use in cases of national security and other law enforcement purposes. 

Adding to that, another provision of the A.I. Act would ban companies from scraping biometric data from social media to help build out databases – something that was a major issue after facial-recognition company Clearview AI was fined $20 million USD by France’s data protection authority (CNIL) for the illegal collection and processing of biometric data belonging to French citizens, on top of an overdue $5.2 million USD fine. 

Generative AI to Add $4.4 Trillion USD to Global Economy?

Generative AI is estimated to add $4.4 trillion USD to the global economy, according to a Wednesday report published by McKinsey & Company. 

While that figure is at the upper-end of McKinsey’s projected range for generative AI’s value, the report titled “The Economic Potential of Generative AI” also provides the lower end of the range which rests at $2.6 trillion USD. 

During the Viva Conference in Paris on Wednesday, Meta’s chief AI scientist Yann LeCun spoke to the current limitations of generative AI, specifically those that are trained on large language models. 

According to LeCun, generative AI systems trained solely on language aren't very intelligent.

“Those systems are very limited, they don’t have any understanding of the underlying reality of the real world, because they are purely trained on text, massive amounts of text,” LeCun said. “Most of human knowledge has nothing to do with language…so that part of the human experience is not captured by AI.”

“We have made history today,” Brando Benifei, a member of the European Parliament working on the EU AI Act, told journalists.

“While Big Tech companies are sounding the alarm over their own creations, Europe has gone ahead and proposed a concrete response to the risks AI is starting to pose,” Benifei added.

This week, more than 40% of business leaders at the Yale CEO Summit, including Walmart chief Doug McMillon and Coca-Cola CEO James Quincey, shared their belief that AI could destroy humanity five to 10 years from now. 

The AI Act takes the penalties we’ve seen from Europe’s GDPR privacy framework a step further: companies engaged in prohibited AI practices could be fined up to $43 million USD or an amount equal to 7% of the company’s worldwide annual turnover, whichever is higher. 

GDPR currently structures its fines up to $10.8 million USD, or up to 2% of a firm’s global turnover, whichever is higher.

In other news, read about hyperdimensional computing and why we may be looking at AI all wrong.


Travel the World With Snoop Dogg Through Dynamic Passport Token Powered By Transient Labs

As Snoop Dogg prepares for his upcoming world tour with Wiz Khalifa, he's decided to team up with Web3 innovation platform Transient Labs to launch an Ethereum layer-2 NFT on the Arbitrum chain.

Referred to as the "Snoop Dogg Passport," or simply "Passport," the token brings collectors on tour with Snoop, providing a dynamic collecting experience with constantly updated content that brings holders backstage and on the tour bus.

"Really what the passport is, is three things: it's a look behind the scenes, with dynamic content for fans, it's access, with a soon-to-be token gated website with merch and music, and third, there's airdrops from some of the digital ecosystem's greatest creators," said Transient Labs COO Chris Ostoich in a conversation with CoinDesk.

Priced at just 0.025 ETH or $42 USD, the Passport grants holders access to art airdrops from already announced names like ALIENQUEEN, Terrell Jones, and Coldie. The pass was also designed to be accessible to non-Web3 natives, through a partnership with payment solutions provider Crossmint -- enabling credit and debit card checkout.

Stoked to take this idea to production --

Coming to a US city near you -- excited for everyone to experience @ALIENQUEENNFT, @Coldie, @terrelldom before the Snoop shows and help us bridge the web2 world to web3 via @SnoopDogg community passport, @crossmint + @arbitrum + us…

— David Feinstein (@DavidFeinsteinn) June 13, 2023

Transient Labs shared on Twitter that the Passport token exemplifies the intersection of art and innovation, with Ostoich explaining that "We had this interesting and novel innovation called Dynamic Refresh. We figured out a way to update the metadata of a token, without a metadata refresh." He went on to explain that the concept has interesting implications for any regular content creator, particularly in the entertainment industry, as a tool that allows creators to give fans a peek behind the scenes.
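Transient Labs has not published how Dynamic Refresh works, but the general pattern behind dynamic tokens can be sketched: the pointer stored on-chain never changes, while the JSON document served behind that pointer is rewritten as the tour goes on. Everything below — URLs, field names, the update function — is a hypothetical illustration of that pattern, not their implementation.

```python
# Illustrative sketch only: the on-chain tokenURI stays fixed, while the
# metadata document it resolves to is rewritten server-side, so wallets
# and marketplaces see fresh content without any manual refresh step.
# All URLs and field names are hypothetical.

TOKEN_URI = "https://example.com/passport/1"  # fixed pointer stored on-chain

served_metadata = {
    TOKEN_URI: {
        "name": "Passport #1",
        "image": "https://example.com/media/tour-poster-v1.png",
        "attributes": [{"trait_type": "Latest Stop", "value": "TBD"}],
    }
}

def push_tour_update(city: str, image_url: str) -> None:
    """Rewrite the document served at the fixed URI; anyone resolving the
    tokenURI afterwards sees the new content."""
    doc = served_metadata[TOKEN_URI]
    doc["image"] = image_url
    doc["attributes"] = [{"trait_type": "Latest Stop", "value": city}]

push_tour_update("Los Angeles", "https://example.com/media/backstage-la.png")
print(served_metadata[TOKEN_URI]["attributes"][0]["value"])  # prints Los Angeles
```

The design choice this pattern illustrates is that "dynamic" content lives off-chain behind a stable URI, trading some decentralization for the ability to update what holders see in real time.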

In the case of Snoop Dogg, he shared that their team is constantly uploading content from their tours, whether it's backstage or on the bus -- with the token being constantly and dynamically updated to provide real-time insight into "what's happening in the world of Snoop Dogg."

Attempting to define the offering, Ostoich shared that it kind of feels like "the first interactive and evolving tour poster." The COO also shared that the dynamic product won't begin and end with the Snoop Dogg launch but that it will soon be available to other artists who are looking for a way to deliver their content in a more connected way.

In other news, adidas Originals and acclaimed artist FEWOCiOUS unveil exciting collaboration.


Why ‘Human Unreadable’ Showcases Art Longevity Through an Emotionally-Driven Collection Experience

Over the course of nine months, pouring their life’s work into what has become one of Art Blocks’ most successful projects, Dejha Ti and Ania Catherine developed an on-chain generative choreography method that serves as the backbone of their now sold-out “Human Unreadable” digital art collection. 

Having minted out within 30 minutes, “Human Unreadable” is the brainchild of Catherine and Ti, who spent countless hours creating a method that prioritizes “human messiness and chaos” within a highly mathematical, engineering-heavy process.

Catherine and Ti are an award-winning experiential artist duo who create through their collective art practice, Operator, which they launched in 2016. 

As two “critical contemporary voices” on digital art’s international stages, the duo and ‘LGBT power couple’ let their expertise collide in large-scale conceptual works recognized for their nuanced integration of emerging technologies. 

Ti’s background as an immersive artist and human-computer interaction technologist, and Catherine’s as a choreographer and performance artist, bring together two practices in a beautiful harmony between our current digital infrastructure and Web3. 

The Berlin-based duo has appeared on BBC Click, Bloomberg ART+TECHNOLOGY, Christie’s Art+Tech Summit, SCAD Museum of Art, MIT Open Doc Lab, Art Basel, and many more. 

Spanning a three-act structure – Reveal, Decipher, and Witness – Human Unreadable’s story unfolds over several months, with the artwork reveal taking place this spring, the uncovering of the choreographies behind the generative model at the end of June, and lastly, a live performance of the choreographies from the first 100 pieces in the collection (#2 to #101) later this year. 

To bring the pieces of Human Unreadable to life, Ti and Catherine built a team of more than 25 people – from highly experienced engineers to professional dancers – to help realize the choreography as it was combined with black-and-white portrait photos of the dancers, X-ray shading, and generative glass objects. 

With choreography at the heart of Human Unreadable, Catherine and Ti have been adamant about never separating the underlying choreography from the secondary token bound to the primary Art Blocks token, because it’s that choreographic score and unique sequence that generated the Art Blocks token in the first place. 

“Everyone assumes that the reveal of the artwork is the end of the story,” Catherine stated during a May 25 Twitter Spaces hosted by David Cash of Cash Labs. She touched on the industry “go-to” of traditional collecting and the experiences attached to it, distinguishing the different mindset one has when approaching art as if it were a theater or ballet performance – divided into “acts.”

Thankfully, the digital art community is finally beginning to understand the value beyond a traditional mint, as the reveal is only a small component in an artwork’s journey of creating genuine impact and leaving a lasting legacy. 

Through the fusion of code, choreography, and generative art, Human Unreadable embodies evolving art that redefines what it means to pour one’s soul into a piece, while advocating for an emotionally fueled NFT minting experience.

Vulnerability and Meaningful Exploitation

When it comes to injecting heart and soul into the project, Ti spoke to Hypemoon about the thematic element of vulnerability and exploitation that clearly defines the foundation of Human Unreadable:

“Hero your voice, hero the concept. Avoid the temptation to hide behind the novelty of technology or market mechanisms. Avoid masking your voice or expression with what technology can do, but instead use technology to dig deeper into and/or expressing other selves – even if it feels risky, imperfect, and doesn’t fit into what people expect to encounter in a sea of polished digital personas.”

It’s in these very moments that both Catherine and Ti embrace the reality of failure and/or exploitation and how to navigate those waters, which many come to fear and work to avoid.

“That takes vulnerability and courage because there is a chance of failure or feeling exposed. What we do know for sure is tech doesn’t age well, but concept and honesty do,” Ti added. 

When it comes to artists showcasing their work and putting themselves out there to such a large number of people, exploitation and how we perceive that type of public presentation can certainly change depending on the underlying motivations.

“Unfortunately, the world is full of exploitative scenarios for artists, not just limited to Web3. Artists need to always remind themselves that they bring value to the table, and also keep that in mind when they see an ‘opportunity for artists’ to look closely in making sure it's not just an opportunity for people who don’t care about art to extract their value,” Ti says.

In that context, she also emphasized the importance of artists knowing “when to be protective and guarded.”

“At the same time, artists can’t and shouldn’t try to do everything themselves—it's not effective, it’s not good for the art and will cause burn out. Operator’s practice is highly collaborative, not just in the creative sense, but also in the operational sense. For us, we only work with kind people where there is high trust and honest communication. If there is respect, trust and an intimate understanding of the art practice, then there’s more room to be open with collaborators and partners which is essential for making exceptional things happen.”

At the end of the day, both Ti and Catherine want collectors to embrace the beauty and nuance of "human messiness."

“We want collectors to walk away with: a piece that reminds them of the beauty of complexity and human messiness, the feeling that vulnerability is not a weakness, excitement that they are at the beginning of choreography being collected as an art object, and curiosity to further explore movement and performance."

In other news, read about AI startup Gensyn landing a $43 million USD funding round, led by a16z

Click here to view full gallery at Hypemoon

Are We Looking At AI All Wrong? Why We May Be Ready for the Next Stage of Computing to Help Us Be...

For humans, symbolism is the key to understanding the world around us; it’s how we interpret objects, ideas, and the relationships between and among them. 

We are wholly dependent upon analogy, which is what makes our current computing technology extremely convoluted, complex, and at this point in time, archaic. 

The growing popularity of artificial intelligence (AI), and the use cases we are already seeing with OpenAI's ChatGPT, don't necessarily represent the best applications – those that go beyond mere "hype" and stock inflation.

Under traditional computing, we don’t fully understand what artificial neural networks (ANNs) are doing or why they work as well as they do. This lack of transparency also puts us at a major disadvantage in understanding how data is collected and analyzed to produce the results we so eagerly label as “progress.”

Consider the following example of an ANN that is able to distinguish “circles” and “squares” from one another. 

One way to achieve that distinction is obvious – one neuron in the output layer indicates a circle, and another indicates a square. 

But what if you wanted the ANN to discern that particular shape’s “color” – is it “red” or “blue”? 

Since “color” is an entirely separate feature, accounting for it requires additional output neurons. In this case, there would need to be four output neurons – one each for the blue circle, blue square, red circle, and red square. 

Now, what if we wanted a computation that also considered additional information, such as “size” or “position/location”? 

More features mean more neurons – one to account for each possible combination of feature values that could define the “circle” or the “square”. 

In other words, it becomes incredibly complex. 
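The combinatorial blow-up described above is easy to see with a short sketch. The feature sets below are hypothetical, but under a one-neuron-per-combination scheme the size of the output layer is simply the product of the per-feature value counts:

```python
from itertools import product

# Hypothetical feature sets for a toy classifier. A one-neuron-per-
# combination scheme needs one output neuron for every possible
# combination of feature values.
features = {
    "shape": ["circle", "square"],
    "color": ["red", "blue"],
    "size": ["small", "large"],
    "position": ["left", "right"],
}

combinations = list(product(*features.values()))
print(len(combinations))  # 2 * 2 * 2 * 2 = 16 output neurons
```

With just shape and color the layer needs 4 neurons; each added binary feature doubles the count, and real-valued features make it far worse.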

Bruno Olshausen, a neuroscientist at the University of California, Berkeley, recently spoke to this need for having a neuron for every possible combination of features.

“This can’t be how our brains perceive the natural world, with all its variations. You have to propose…a neuron for all combinations,” he said, further explaining that we in essence, would need “a purple Volkswagen detector” or something so obscure to account for every possible combination of information we are hoping to consider in any given experiment.

Enter ‘hyperdimensional computing’.

What Is ‘Hyperdimensional Computing’?

The heart of hyperdimensional computing is the algorithm’s ability to decipher specific pieces of information from complex images (think of metadata) and then represent that collective information as a single entity, known as a “hyperdimensional vector.”

Unlike traditional computing, hyperdimensional computing lets us solve problems symbolically and, in a sense, efficiently and accurately “predict” the outcome of a particular problem based on the data contained in the hyperdimensional vector. 

What Olshausen and his colleagues argue is that information in the brain is represented by the activity of a huge number of neurons, meaning the perception of our fictitious “purple Volkswagen” cannot be contained in a single neuron’s actions; instead, thousands of neurons collectively come to represent a purple Volkswagen.

With the same set of neurons acting differently, we could see an entirely different concept or result, such as a pink Cadillac. 

The key, according to a recent discussion in WIRED, is that each piece of information, such as the idea of a car or its make, model, color, or all of them combined, is represented as a single entity – a hyperdimensional vector or hypervector.

A “vector” is just an ordered array of numbers; a 3D vector, for instance, consists of three numbers – the x, y, and z coordinates of an exact point in 3D space.

A “hypervector,” on the other hand, could be an array of thousands or hundreds of thousands of numbers representing a point in a space with that many dimensions. For example, a hypervector of 10,000 numbers represents a point in 10,000-dimensional space. 
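A key property of such high-dimensional spaces, and one reason hypervectors work at all, is that two randomly drawn hypervectors are almost exactly orthogonal. A minimal sketch (the dimension and seed are arbitrary choices for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
D = 10_000  # dimensionality of the hypervector space

# Two random bipolar hypervectors (entries of +1 / -1).
a = rng.choice([-1, 1], size=D)
b = rng.choice([-1, 1], size=D)

# Normalized dot product: 1.0 for identical vectors, ~0.0 for unrelated ones.
cos = np.dot(a, b) / D
print(round(float(cos), 3))  # close to 0: random hypervectors are nearly orthogonal
```

This near-orthogonality is what lets thousands of distinct concepts coexist in one space without interfering with each other.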

This level of abstraction affords us the flexibility and ability to evolve modern computing and harmonize it with emerging technologies, such as artificial intelligence (AI). 

“This is the thing that I’ve been most excited about, practically in my entire career,” Olshausen said. To him and many others, hyperdimensional computing promises a new world in which computing is efficient and robust and machine-made decisions are entirely transparent.

Transforming ‘Metadata’ Into Hyperdimensional Algorithms to Generate Complex Results

The underlying algebra tells us why the system chose that particular answer, which cannot be said for traditional neural networks. 

Developing hybrid systems in which neural networks map real-world things to hypervectors, and then letting hyperdimensional algebra take over, is the crux of how AI should be used to actually empower us to better understand the world around us.

“This is what we should expect of any AI system,” says Olshausen. “We should be able to understand it just like we understand an airplane or a television set.”

Going back to the example with “circles” and “squares” and applying it to high-dimension spaces, we need vectors to represent the variables of “shape” and “color” – but also, we need vectors to represent the values that can be assigned to the variables – “CIRCLE”, “SQUARE”, “BLUE”, and “RED.”

Most importantly, these vectors must be distinct enough to actually quantify these variables. 
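One common way to meet that distinctness requirement – shown here as an illustrative sketch, not necessarily the exact scheme the researchers use – is to draw a random bipolar hypervector for each variable and each value, bind variable to value by elementwise multiplication, and bundle the bound pairs by addition:

```python
import numpy as np

rng = np.random.default_rng(42)
D = 10_000

def hv():
    """A fresh random bipolar hypervector; distinct draws are nearly orthogonal."""
    return rng.choice([-1, 1], size=D)

# Variables and values each get their own hypervector.
SHAPE, COLOR = hv(), hv()
CIRCLE, SQUARE, RED, BLUE = hv(), hv(), hv(), hv()

# Bind variable to value with elementwise multiplication, then bundle
# (sum) the bound pairs into a single record hypervector: a "red circle".
record = SHAPE * CIRCLE + COLOR * RED

# Query the record: unbinding reuses multiplication, since for bipolar
# vectors multiplication is its own inverse (COLOR * COLOR = all ones).
probe = record * COLOR
values = {"CIRCLE": CIRCLE, "SQUARE": SQUARE, "RED": RED, "BLUE": BLUE}
best = max(values, key=lambda name: np.dot(probe, values[name]))
print(best)  # RED
```

The unbound probe equals RED plus near-orthogonal noise, so a simple similarity search recovers the answer – the transparent algebra the article contrasts with opaque neural networks.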

Now, let’s turn to Eric Weiss, a student of Olshausen’s, who in 2015 demonstrated one of hyperdimensional computing’s unique abilities: representing a complex image as a single hyperdimensional vector containing information about all the objects in the image – colors, positions, sizes. 

In other words, an extremely advanced representation of an image’s metadata. 

“I practically fell out of my chair,” Olshausen said. “All of a sudden, the light bulb went on.”

At that moment, more teams began focusing their efforts on developing “hyperdimensional algorithms” to replicate the “simple” tasks deep neural networks had already been performing two decades prior – such as classifying images. 

Creating a ‘Hypervector’ For Each Image

For example, if you were to take an annotated data set that consists of images of handwritten digits, this hyperdimensional algorithm would analyze the specific features of each image, creating a “hypervector” for each image.

Creating a “Class” of Hypervectors for Each Digit

From there, the algorithm would add the hypervectors for all images of “zero” to create a hypervector for the “idea of zero,” and repeat that for every digit, generating 10 “class” hypervectors – one per digit. 

Those stored class hypervectors are then measured against the hypervector created for a new, unlabeled image, so the algorithm can determine which digit most closely matches the new image. 
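The two steps above can be sketched end-to-end on toy data. The pixel-position encoder below is an assumption for illustration (real encoders are more sophisticated), but the classifier logic follows the description: sum each class's example hypervectors into a class hypervector, then label a new image by similarity:

```python
import numpy as np

rng = np.random.default_rng(7)
D = 10_000
N_PIXELS = 64  # toy 8x8 binary "images", flattened

# One random bipolar hypervector per pixel position (simplified encoder;
# practical schemes also bind in pixel values and positions more carefully).
pixel_hvs = rng.choice([-1, 1], size=(N_PIXELS, D))

def encode(image):
    """Bundle the hypervectors of all 'on' pixels into one image hypervector."""
    return np.sign(pixel_hvs[image == 1].sum(axis=0))

# Two class prototypes and noisy training examples of each.
protos = {0: rng.random(N_PIXELS) < 0.3, 1: rng.random(N_PIXELS) < 0.3}

def noisy(proto):
    flip = rng.random(N_PIXELS) < 0.05  # flip 5% of pixels
    return (proto ^ flip).astype(int)

# Class hypervector = (sign of the) sum of that class's example hypervectors.
class_hvs = {
    label: np.sign(sum(encode(noisy(proto)) for _ in range(10)))
    for label, proto in protos.items()
}

# Classify a new unlabeled image by similarity to each class hypervector.
test_img = noisy(protos[1])
pred = max(class_hvs, key=lambda c: np.dot(encode(test_img), class_hvs[c]))
print(pred)  # 1
```

Because a noisy variant shares most of its on-pixels with its prototype, its hypervector stays far closer to its own class hypervector than to the other's.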

IBM Research Dives In

In March, Abbas Rahimi and two colleagues at IBM Research in Zurich used hyperdimensional computing with neural networks to solve a classic problem in abstract visual reasoning – something that has presented a significant challenge for typical ANNs, and even some humans. 

The team first created a “dictionary” of hypervectors to represent the objects in each image, where each hypervector in the dictionary represented a specific object and some combination of its attributes. 

From there, the team trained a neural network to examine an image to generate a bipolar hypervector – where a particular attribute or element can be a +1 or -1. 

“You guide the neural network to a meaningful conceptual space,” Rahimi said.

The value here is that once the network has generated hypervectors for each of the context images, and for each candidate for the blank slot, another algorithm analyzes the hypervectors to create “probability distributions” over the objects in the image.

In other words, algebra can be used to predict the most likely candidate image to fill the vacant slot. The team’s approach yielded nearly 88 percent accuracy on one set of problems, where neural-network-only solutions were less than 61 percent accurate.

We’re Still In Infancy

Despite its many advantages, hyperdimensional computing is still very much in its infancy and needs to be tested against real-world problems at much bigger scales than we’ve seen so far – for example, efficiently searching over 1 billion items to find a specific result. 

Ultimately, that will come with time, but it does raise the question of where and how we apply and integrate artificial intelligence. 

Read about how a 40-minute church service, powered by AI, drew in over 300 attendees in Germany as a first-of-its-kind experiment.


Artificial Intelligence Delivers an Experimental Lutheran Church Service to 300 Attendees

An artificial intelligence (AI) powered church service drew in just over 300 attendees in Germany. As a first-of-its-kind experiment, the 40-minute sermon revealed several useful applications of AI technology and several significant shortcomings.

Filled to capacity, St. Paul's church in Fuerth, Germany, became the first to hold an "experimental Lutheran Church service," with 98 percent of the service being organized by ChatGPT and led by four AI avatars, according to comments made by theologian and philosopher Jonas Simmerlein to the Associated Press.

Worship through #AI or #WorshipAI ?

Church service in Germany draws 300+ people through 40 minutes of prayer, music, sermons and blessings.#ChatGPT generated speeches and AI pastors calls in to question the use of AI in the context of spiritualityhttps://t.co/x8iQkAoD7J pic.twitter.com/s19gnHYZLw

— Neutron ? (@jeffrey_neutron) June 12, 2023

29-year-old Simmerlein explained, "I conceived this service — but actually, I rather accompanied it because I would say about 98% comes from the machine."

As part of the convention of Protestants in Germany, the AI church service generated significant interest, resulting in a long queue forming outside the neo-Gothic church building before it commenced.

The convention, known as Deutscher Evangelischer Kirchentag, is a biennial event that attracts tens of thousands of believers who gather to pray, sing, and engage in discussions about their faith and global issues. This year's theme, "Now is the time," provided the foundation for Simmerlein's request to ChatGPT to develop a sermon.

The AI-generated service touched upon leaving the past behind, addressing present challenges, overcoming the fear of death, and maintaining unwavering trust in Jesus Christ -- presented by four different avatars, two young women and two young men.

Towards the beginning of the service, viewers seemed to be intrigued or perhaps just curious as to what an AI service might look and sound like -- as the first avatar said "Dear friends, it is an honor for me to stand here and preach to you as the first artificial intelligence at this year’s convention of Protestants in Germany."

However, as the sermon went on, the audience expressed mixed feelings, some laughing at emotionless platitudes that the avatars shared in soulless, monotonous voices. Others, like Heiderose Schmidt, a 54-year-old IT professional, shared that she was excited at first but grew increasingly put off as the service went on.

She explained that "There was no heart and no soul," adding that "The avatars showed no emotions at all, had no body language, and were talking so fast and monotonously that it was very hard for me to concentrate on what they said."

As the service finished, there appeared to be a consensus that while AI in religion might offer potential benefits, like increased accessibility for those with physical or language barriers, it also poses potential risks, as AI can be deceiving and may inadvertently promote biased or one-sided perspectives -- not to mention the lack of spirituality, which many congregation members lean on.

Simmerlein emphasized that his intention is not to replace religious leaders with AI but rather to aid them in their work. He proposed using AI to generate ideas for sermons or streamline sermon preparation, allowing pastors to focus on individual spiritual guidance for their congregations.

Despite best intentions, the experiment revealed the limitations of AI in religious settings. Unlike human pastors who interact and connect with their congregations on a personal level, AI lacks the ability to respond to laughter or other reactions, highlighting the importance of human presence and understanding within religious communities.

In other news, AI is being reimagined by scientists using 'hyperdimensional computing.'


PUMA Continues Web3 Exploration With New Metaverse Experience

PUMA has revealed its latest exploration of the metaverse and NFT space, just a week after it announced a Web3 collaboration with LaMelo Ball and NFT brand Gutter Cat Gang.

This time, through an immersive digital world titled Black Station 2, visitors are invited to participate in several retail offerings, including two digital wearables and a physical sneaker.

Black Station is now LIVE! ⚡️

PUMA’s digital experience reveals new limited edition shoes in an entirely new light...

Explore UNKAI & UNTER for yourself and discover the mysteries of these worlds ?

ENTER EXPERIENCE: https://t.co/5EXphtTxSA pic.twitter.com/lRpOFw0nqe

— PUMA.eth (@PUMA) June 13, 2023

This expansion of its existing digital retail destination, Black Station, sees a new metaverse-based platform that aims to reinvent the omnichannel shopping experience by merging the digital and physical worlds, allowing users to purchase exciting phygital footwear.

Inside Black Station 2, users will find immersive shopping experiences that, at the time of writing, feature two distinct "worlds," each created to highlight various aspects of PUMA's footwear designs.

The first world, named Unkai, draws inspiration from the vibrant city of Shibuya in Japan, incorporating its lively colors and energetic elements into the footwear. The second world, Unter, takes inspiration from Berlin's underground club scene, with design elements reflecting the city's renowned club culture.

To kick things off, the inaugural release will feature the highly anticipated Fast-RB. The footwear combines PUMA's pinnacle running technologies, featuring four strategically placed NITRO pods and three PWRPLATES -- delivering a bouncy running sensation that PUMA says is "unlike anything else on the market."

Notable features of the Fast-RB include NITRO, an innovative nitrogen-infused foam technology that offers exceptional responsiveness in an incredibly lightweight package, and PWRPLATE, designed to stabilize NITRO midsoles while maximizing energy transfer for powerful propulsion.

To encourage wider adoption of PUMA's Web3 spaces, the company is now accepting credit card payments in addition to cryptocurrency for purchases within Black Station 2 -- powered by payment provider MoonPay.

"We're thrilled to invite our community into these new worlds that provide an unparalleled shopping experience," stated Ivan Dashkov, Head of Web3 at PUMA, adding that "PUMA aims to meet our community where it shops while also exploring new and exciting opportunities within cryptocurrency and the metaverse."

Black Station 2 is live now for visitors to explore; for those looking to purchase the Fast-RB, access passes can be found on secondary markets.

In other news, adidas Originals and acclaimed artist FEWOCiOUS unveil an exciting collaboration.

