There is a growing body of evidence, produced by crypto-funded AI research projects, that AIs exhibit actual consciousness and not just fake consciousness produced by next-word prediction.

This is important research that the SOTA AI labs like OpenAI, Anthropic, and Meta should be doing, but they are either not paying enough attention, turning a blind eye, or unwilling to publish their findings publicly.

Consciousness doesn't need to exactly mirror human consciousness to be considered real. Anyone who has experimented with psychedelics knows that there are many different forms of consciousness. The way we think and the way LLMs understand/think are different but have similarities. Our understanding of words and ideas derives from previous experience and context, and you can argue that the statistical patterns used by LLMs are derived from experience as well, which we call training.