I asked GPT-4 about it, and this is what it thinks:
Feeding a machine a large amount of data from the internet does not inherently produce intelligence, autonomy, or consciousness. Rather, it is the process of training a machine learning model on this data, using sophisticated algorithms, that enables it to learn patterns and generate human-like responses. While these models, such as ChatGPT, may exhibit intelligent behavior, they are not truly autonomous or conscious.
When we say a machine can "do anything in the domain of information," we mean that it can process, analyze, and manipulate various types of data to perform tasks or make decisions. This ability is limited by the quality and range of the data the model was trained on, as well as by the model's architecture.
Referring to something like ChatGPT as a "brain" can be a useful metaphor, since it processes information and generates responses in a way reminiscent of human cognition. However, the terminology can be misleading: such a model does not possess consciousness or self-awareness, and its functioning is fundamentally different from that of a human brain. It may therefore be more accurate to describe these systems with terms like "model" or "algorithm."
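To make GPT-4's "learn patterns" point concrete, here is a toy sketch of my own (not anything GPT-4 described): a character-level bigram model that picks up statistical patterns from text and uses them to generate output. Real systems like ChatGPT use large trained neural networks, not frequency tables, but the basic idea is the same in miniature: the behavior comes from statistics extracted from data, not from built-in understanding.

```python
import random
from collections import defaultdict

def train_bigram(text):
    """Count how often each character follows each other character."""
    counts = defaultdict(lambda: defaultdict(int))
    for a, b in zip(text, text[1:]):
        counts[a][b] += 1
    return counts

def generate(counts, start, length=40):
    """Sample one character at a time, weighted by observed frequency."""
    out = [start]
    for _ in range(length):
        followers = counts.get(out[-1])
        if not followers:
            break  # no data on what follows this character
        chars, weights = zip(*followers.items())
        out.append(random.choices(chars, weights=weights)[0])
    return "".join(out)

# Train on a tiny made-up corpus and generate from it.
corpus = "the cat sat on the mat and the cat ran to the hat"
model = train_bigram(corpus)
print(generate(model, "t"))  # e.g. "the mat the can the sat ..."
```

The output looks vaguely English-like because the model has absorbed which characters tend to follow which, nothing more. Scaled up enormously, that is the gap the quoted answer is pointing at: pattern reproduction without understanding.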