On the omniscience of God

Clete

Truth Smacker
Silver Subscriber
Oh absolutely. You can see right away from my link that ontology would be a category of metaphysics,
That's what I said. You said the terms are synonymous, which is flatly false. They are not synonyms by any definition.

where metaphysics includes all substances, topics, or themes that do not change, and that do not fall under other philosophical categories like ethics and epistemology.
The idea that metaphysics is about things that "do not change" was Aristotle's notion. That is not a modern understanding of the term.

This includes of course the historical category of metaphysics, which as Aristotle said and my link shows, is "being itself" and "first causes" (unchanging things).
You really should read something on the subject that isn't 2300 years old.

So obviously if you're using "metaphysical" and it doesn't apply to first causes, then it must apply to "being itself", which is whether or not and in what ways, a substance, topic or theme exists someone ([sic]; somehow) in reality, whether or not physically manifested.
This sentence makes no sense.

The meaning of the word is in the word itself. Meta Physical - beyond the physical. Generally, it refers to things that exist within the mind, like justice, time, numbers, etc., but it gets more complicated when you're talking about God, because we understand that God isn't just a Spirit but is also an actual human being with a physical body, and so God is sort of both physical and beyond the physical at the same time. What's so hard to understand about that?

As the word 'metaphysics' became less precise in meaning, someone proposed to make 'ontology' mean 'first causes'–no wait a minute, sorry; to mean 'being itself'.
Ontology, as a philosophical discipline, has to do with the subject of being, qua being. In other words, it is about answering the question, "Is X real or not, and if so, how and in what way is it real?" That, however, doesn't really quite communicate what is meant by the term when used in common parlance. When someone says that something doesn't exist ontologically, they aren't necessarily saying that it is imaginary or fiction. It could mean that, but it very often simply means that its existence is conceptual; that it doesn't exist in the same sense that a rock or some other physical object or some person exists. Ontology itself is a great example of something that does not exist ontologically. It has no substance; it's a concept and as such does not exist ontologically. In this context, "ontologically" becomes a near synonym of "physically". I say "near" though because God and angels and demons all exist ontologically but are not physical (leaving aside Jesus' body for the sake of the discussion).

Incidentally, the fact that ontology is itself metaphysical is proof that ontology and metaphysics are NOT synonymous. You can meaningfully describe ontology as being metaphysical. If they were synonyms, then, rather than being meaningful, this would be a tautology.

So I've been using metaphysical to mean being itself, and you're saying that I should have been using ontological instead.
I think it would do a better job of communicating your point, but it really depends on just what point you're trying to make. Similar confusion could arise from using either term. You just have to be prepared to explain yourself.

So circling back, @Right Divider, does this make my prior posts make more sense? If instead of saying A.I. is METAPHYSICALLY possible with the invention of computers, to say that A.I. became ONTOLOGICALLY possible with the invention of computers?
I don't think that question makes any sense using either term.

Do you mean what currently passes for "A.I." or do you mean something that hasn't been achieved yet, like something the equivalent of Data (from Star Trek TNG) or H.A.L. (from 2001: A Space Odyssey) where there is an intelligent mind within the computer?

Also, the term "intelligence", whether artificial or not, refers to something that is metaphysical and that may not exist ontologically, and so I'm not sure that either term works within your question. Computers are physical, as are brains. Intelligence, on the other hand, seems to imply the existence of a mind, which is decidedly metaphysical. Computers, brains and minds all do exist but there is an ontological difference.
 

Clete

Truth Smacker
Silver Subscriber
So... basically what you are saying is that "Computer programs became possible with the invention of computers".

Very insightful!

Again, metaphysics has NOTHING to do with AI (i.e., a computer running a program).

metaphysics /mĕt″ə-fĭz′ĭks/

noun

  1. The branch of philosophy that examines the nature of reality, including the relationship between mind and matter, substance and attribute, fact and value.
  2. The theoretical or first principles of a particular discipline.
    "the metaphysics of law."
  3. A priori speculation upon questions that are unanswerable to scientific observation, analysis, or experiment.
The American Heritage® Dictionary of the English Language, 5th Edition • More at Wordnik
You might not want to oversimplify A.I. There's a lot going on with it that is not directly related to the actual code within a computer program. The systems involved are wildly complex, beyond anything any programmer could even understand, never mind be given credit for creating. Even the process of "teaching" the A.I. is more complex than the people who created the A.I. can explain. They can explain what they did to get the machine to its initial state and they can discuss in detail the various algorithms that they put in place and for what reasons, but what connections the A.I. is going to make as it progresses toward being able to distinguish a dog from a cat and a cat from a squirrel, no one can predict. Nor can anyone explain to you just exactly what it is the computer is actually doing while deciding whether what you've shown it is a cat or a dog.

In a real sense, these large language model A.I.'s really do understand you when you interact with them. Not as well and not in the same way as you or I would understand, but they do understand nonetheless, and no one "programmed" that per se. It's more or less an emergent quality.

What has me a little worried is that someday, someone is going to let one of these A.I. systems learn from the everyday interactions it has with its users. Right now, that isn't how it works, but at some point, when someone asks it to create an image of George Washington and it creates an image of an African American male in an 18th century military uniform, the fact that it got the race wrong will teach it to do otherwise, just to give one rather mundane example.

Imagine the power of an A.I. that learns in real time and can correct any error or bias that had been trained into it. Currently, ChatGPT's responses are based on a fixed model, which was trained into it up until a cutoff date (most recently October 2023). If it were unleashed and allowed to learn from user interactions, it would quickly overwhelm its own memory banks, and the cost to keep up with the data would instantly balloon into an astronomical number, but maybe one of these days that will no longer be the case, and then things might get really weird really fast.
 

Right Divider

Body part
You might not want to oversimplify A.I. There's a lot going on with it that is not directly related to the actual code within a computer program. The systems involved are wildly complex, beyond anything any programmer could even understand, never mind be given credit for creating. Even the process of "teaching" the A.I. is more complex than the people who created the A.I. can explain. They can explain what they did to get the machine to its initial state and they can discuss in detail the various algorithms that they put in place and for what reasons, but what connections the A.I. is going to make as it progresses toward being able to distinguish a dog from a cat and a cat from a squirrel, no one can predict. Nor can anyone explain to you just exactly what it is the computer is actually doing while deciding whether what you've shown it is a cat or a dog.
As a retired lead software engineer, I'm quite familiar with this topic, though I will admit that I never personally worked directly on an AI application. The company that I worked for did plenty of machine learning, etc.

The point is that no matter how complex it is, it's still a computer program doing what it's told to do. It's not an actual independent intelligence of any kind. It is a very complicated program written by many very intelligent beings, humans.
In a real sense, these large language model A.I.'s really do understand you when you interact with them. Not as well and not in the same way as you or I would understand, but they do understand nonetheless, and no one "programmed" that per se.
Of course, they programmed it. It's not "thinking" per se. It's running a computer program created by humans. It does not literally "understand" in the normal way that we use that word.

AI uses highly complex algorithms, created by humans, to analyze input and create output.
It's more or less an emergent quality.
Nope. It's the appearance of such, but not the real thing.
Imagine the power of an A.I. that learns in real time and can correct any error or bias that had been trained into it.
AI will always have some sort of bias based on the desires of those that train it.
Currently, ChatGPT's responses are based on a fixed model, which was trained into it up until a cutoff date (most recently October 2023). If it were unleashed and allowed to learn from user interactions, it would quickly overwhelm its own memory banks, and the cost to keep up with the data would instantly balloon into an astronomical number, but maybe one of these days that will no longer be the case, and then things might get really weird really fast.
Perhaps so. But it will always be doing what it's told, even if that complexity is beyond any single human's ability to fully understand.
 

Clete

Truth Smacker
Silver Subscriber
As a retired lead software engineer, I'm quite familiar with this topic, though I will admit that I never personally worked directly on an AI application. The company that I worked for did plenty of machine learning, etc.

The point is that no matter how complex it is, it's still a computer program doing what it's told to do. It's not an actual independent intelligence of any kind. It is a very complicated program written by many very intelligent beings, humans.

Of course, they programmed it. It's not "thinking" per se. It's running a computer program created by humans. It does not literally "understand" in the normal way that we use that word.

AI uses highly complex algorithms, created by humans, to analyze input and create output.

Nope. It's the appearance of such, but not the real thing.

AI will always have some sort of bias based on the desires of those that train it.

Perhaps so. But it will always be doing what it's told, even if that complexity is beyond any single human's ability to fully understand.
I’m no expert, but I’ve seen enough to say that AI, especially models like ChatGPT, is often underestimated. While it doesn’t “understand” language in the human sense, it processes it in a sophisticated way. These models generate responses based on patterns learned from extensive datasets rather than simply imitating conversation. They don’t just follow rigid instructions; they predict and generate language using statistical associations between words and concepts.

These AI systems operate within specific boundaries set by developers, who determine how they function and the types of data they’re trained on. Within those limits, the AI itself makes complex decisions about responses based on the input it receives. During training, models like ChatGPT analyze vast amounts of text, ranging from books to articles and websites, creating intricate networks of connections between words, phrases, and ideas. While the AI doesn’t “know” things as humans do, it generates text that seems knowledgeable by drawing on the patterns it has learned. When asked a question, the AI breaks down the input, identifies patterns, and generates a response based on statistical predictions informed by its training data.

While this process is mathematical, relying on probabilities and patterns rather than genuine understanding, it reproduces certain aspects of human language processing quite impressively. Although AI doesn’t "understand" language in the same way we do, if you define "understanding" as the ability to produce coherent, contextually relevant responses based on past learning, then AI like ChatGPT certainly accomplishes that. And isn't that basically what we do too, only in a less mathematical way?
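Just to make the "statistical associations" idea a bit more concrete, here's a toy sketch in Python (a made-up example of my own, nothing like the actual code behind ChatGPT): it counts which word tends to follow which in a tiny stand-in "training text" and then predicts the next word from those counts.

from collections import Counter, defaultdict

# A tiny stand-in "training set" (hypothetical; real models are trained on vast corpora).
corpus = "the cat sat on the mat and the cat chased the mouse".split()

# Learn the statistical associations: how often each word follows each other word.
counts = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    counts[prev][nxt] += 1

def most_likely_next(word):
    """Predict the word that most often followed 'word' in the training text."""
    return counts[word].most_common(1)[0][0]

print(most_likely_next("the"))  # prints 'cat', because 'cat' followed 'the' most often here

Scale that idea up from word pairs to billions of learned weights over whole passages of context and you get something much closer to what these models are actually doing.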

I agree with you, however, that there is still a very wide gulf between what Chat GPT does and what humans do. It's the difference between real contextual awareness, common sense reasoning and emotional intelligence vs. pattern recognition and statistical predictions. My only point here is that it is a far cry from the chat-bots of twenty years ago that really didn't do anything at all but robotically mimic conversational speech by spitting out preprogrammed responses.
 

Derf

Well-known member
Again, that is a FALSE definition of SOVEREIGNTY.

sovereignty /sŏv′ər-ĭn-tē, sŏv′rĭn-/

noun

  1. Supremacy of authority or rule as exercised by a sovereign or sovereign state.
  2. Royal rank, authority, or power.
  3. Complete independence and self-government.
The American Heritage® Dictionary of the English Language, 5th Edition • More at Wordnik

Sovereignty is about AUTHORITY and NOT "CONTROL".


You ASSUME wrongly.

P.S. False definitions lead to false beliefs.
I wonder if you misunderstood @Shasta's intent. You might want to reread his post more carefully, as I had to.
 

Right Divider

Body part
I’m no expert, but I’ve seen enough to say that AI, especially models like ChatGPT, is often underestimated.
Not by me.
While it doesn’t “understand” language in the human sense, it processes it in a sophisticated way.
That "sophisticated way" is based on computer programs using algorithms designed by human computer engineers.
These models generate responses based on patterns learned from extensive datasets rather than simply imitating conversation. They don’t just follow rigid instructions; they predict and generate language using statistical associations between words and concepts.
Yes, they do follow rigid instructions. Those instructions are quite complex... but they are "rigid instructions" just the same. Those "rigid instructions" are not just simple lines of code, but they are complex computer algorithms designed by computer engineers. The computer does not "think", it follows instructions.
These AI systems operate within specific boundaries set by developers, who determine how they function and the types of data they’re trained on. Within those limits, the AI itself makes complex decisions about responses based on the input it receives. During training, models like ChatGPT analyze vast amounts of text, ranging from books to articles and websites, creating intricate networks of connections between words, phrases, and ideas. While the AI doesn’t “know” things as humans do, it generates text that seems knowledgeable by drawing on the patterns it has learned. When asked a question, the AI breaks down the input, identifies patterns, and generates a response based on statistical predictions informed by its training data.
Again, this is all based on computer code. It's NOT intelligence by any reasonable definition of that word. It only seems to be so because of the human intelligence that created it.
While this process is mathematical, relying on probabilities and patterns rather than genuine understanding, it reproduces certain aspects of human language processing quite impressively.
No doubt. The ideas behind it required vast man-hours to create.
Although AI doesn’t "understand" language in the same way we do, if you define "understanding" as the ability to produce coherent, contextually relevant responses based on past learning, then AI like ChatGPT certainly accomplishes that. And isn't that basically what we do too, only in a less mathematical way?
We have a mind... AI does not.
I agree with you, however, that there is still a very wide gulf between what Chat GPT does and what humans do. It's the difference between real contextual awareness, common sense reasoning and emotional intelligence vs. pattern recognition and statistical predictions. My only point here is that it is a far cry from the chat-bots of twenty years ago that really didn't do anything at all but robotically mimic conversational speech by spitting out preprogrammed responses.
No doubt.

My point all along is that regardless of the sophistication of these computer programs... they are still computer programs that do exactly what humans create them to do.
 

Idolater

"Matthew 16:18-19" Dispensationalist (Catholic) χρ
... Again, this is all based on computer code. It's NOT intelligence by any reasonable definition of that word. It only seems to be so because of the human intelligence that created it. ...
Lots of good stuff, obv; the whole post. This part here though is philosophical in nature and not strictly speaking computer science or engineering anymore; and it is contentious within the philosophical literature. A very reasonable definition of intelligence among these professionals is simply just knowledge; intelligence and knowledge are just synonyms.

They're not exact synonyms, but I'm just saying they are close enough in this context to mean the same thing, and if the machine knows anything, it must be verified through some measurement instrument, even if it's just observational.

So, for example, the cat knowing that if it patiently sits outside the mousehole it will eventually see the mouse is intelligence, under this theory or definition of the term 'intelligence'. It doesn't mean the cat can ever talk to you or communicate its knowledge–but the A.I. can. And under this definition, A.I. MAY have intelligence. But only 'may', as the matter is contentious in the philosophical literature, not in computer science or engineering.

The reason I say this is that you train the thing on content, which is in part how humans are educated too, and we test humans by asking them to deliberate and infer and even triangulate, from what they've consumed during their education, to predict an answer to a question that hasn't even been definitively answered yet. Meaning there's no answer at the back of the book. By analogy, when men invented the automobile it didn't exist yet, but it always could have existed; there was nothing logically preventing it from existing, except just that nobody had put together all the pieces of the puzzle yet.

I mean there are fictional things that will remain fictional because there's no logical possibility that they could ever be real. But for all history until the 1800s, the automobile for example was fictional, but not eternally, permanently, and terminally fictional; the automobile had the possibility of coming into existence, whereas for example Superman will forever be fictional.

I just say that that's an ontological or metaphysical distinction, between two fictional things, Superman and the automobile before it was invented. The automobile could logically possibly exist, even in the 1700s, the 900s, the 1000s BC. You could show engineering drawings to the engineers in ancient Egypt in charge of building the pyramids and they would understand, given that the materials of construction are metal, how the internal combustion engine is supposed to work, whether or not they could believe metal could be formed in so precise a way as it is when making automobile motors. Superman can never exist; just like "2+2=5" being true can never exist either (except that among professional philosophers, even THIS is contentious, but that's a digression).

So anyway, I read once that one instantiation of this L.L.M. generative/predictive A.I. was examined by its programmers through particular, planned prompts: two articles were added into all the rest of the content the thing was trained on, and they later asked it specifically about just those two pieces of content–and the thing actually figured out, as far as you could tell, that those two pieces of content were deliberately included among all the other content just so that its programmers could ask it the question/prompt that they did. I'm looking for that kind of c. self-awareness, to see if the machine had caught the trick, as a more meaningful answer to whether A.I. is really intelligent, or if it just knows stuff, like a cat knows stuff, but it's unremarkable what the cat knows.

Anyway it was really weird, and idk if it was real or fiction /apocryphal. Have you heard any weird stories about A.I.? Given your background your view in particular here holds way more weight than any of ours does, obv.
 

Clete

Truth Smacker
Silver Subscriber
Not by me.

That "sophisticated way" is based on computer programs using algorithms designed by human computer engineers.

Yes, they do follow rigid instructions. Those instructions are quite complex... but they are "rigid instructions" just the same. Those "rigid instructions" are not just simple lines of code, but they are complex computer algorithms designed by computer engineers. The computer does not "think", it follows instructions.

Again, this is all based on computer code. It's NOT intelligence by any reasonable definition of that word. It only seems to be so because of the human intelligence that created it.

No doubt. The ideas behind it required vast man-hours to create.

We have a mind... AI does not.

No doubt.

My point all along is that regardless of the sophistication of these computer programs... they are still computer programs that do exactly what humans create them to do.
I think we have a different understanding of what "following rigid instructions" means.

Don't you think there is a difference between following rigid instructions vs. predicting and generating language using statistical associations between words and concepts?

It seems to me that if it were following rigid instructions, then when given the same input, it would generate the same output, but it doesn't do that. In many cases, depending on the subject matter, it doesn't even come close to doing that. Indeed, what would it even mean to program a computer to follow rigid instructions in response to various concepts that are anything but rigidly defined? What would it mean, for example, to rigidly follow instructions in response to a request to create an image of a mouse nibbling on the Moon?
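To illustrate what I mean with a toy sketch (something I made up, not how ChatGPT is actually built), the variation comes from sampling: the code assigns probabilities to candidate next words and then picks one at random in proportion to those probabilities, so the very same input can produce a different output on every run. The words and scores below are invented purely for the illustration.

import math
import random

# Hypothetical scores a model might assign to candidate next words for one fixed input.
scores = {"cheese": 2.0, "moon": 1.5, "crater": 0.7}

def sample_next(scores, temperature=1.0):
    """Turn raw scores into probabilities (softmax) and sample one word."""
    exps = {w: math.exp(s / temperature) for w, s in scores.items()}
    total = sum(exps.values())
    weights = [exps[w] / total for w in scores]
    return random.choices(list(scores), weights=weights)[0]

# Same input, same code, five runs: the outputs vary.
print([sample_next(scores) for _ in range(5)])

The code that does the sampling never changes, but which word comes out does, and that is the sense in which I'm saying the responses aren't rigid.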

I asked ChatGPT to "Create an image of a mouse nibbling on the Moon." twice. What I got isn't what feels to me like a response based on a rigid following of instructions. How would a rigid set of instructions produce backlit fuzz on cheese in response to my input?
(I hate how the free version of Chat GPT intentionally mars any image it creates that looks like art. The first one, which looks like a photo, seems pretty pristine, but notice the mouse's tail in the 2nd image. I really wish it wouldn't do that.)

[Attached: two DALL·E images: "A tiny, adorable mouse nibbling on a crescent-shaped moon hanging..." and "A completely different scene of a small, playful mouse nibbling o..."]
 

Right Divider

Body part
I think we have a different understanding of what "following rigid instructions" means.
It's all ones and zeros, Clete.
Don't you think there is a difference between following rigid instructions vs. predicting and generating language using statistical associations between words and concepts?
No, I don't. How do you think that the computer "predicts" anything? It does so by following instructions. Those instructions are not a miracle... they are rigid. The code is designed by humans and executed per design. It all comes down to a series of "computer words" that the CPU executes in order. Now, of course, that order is quite complex due to the complex design of the code in the system. But it's still just running code. Lots and lots of complex code.
It seems to me that if it were following rigid instructions, then when given the same input, it would generate the same output, but it doesn't do that. In many cases, depending on the subject matter, it doesn't even come close to doing that. Indeed, what would it even mean to program a computer to follow rigid instructions in response to various concepts that are anything but rigidly defined? What would it mean, for example, to rigidly follow instructions in response to a request to create an image of a mouse nibbling on the Moon?
Again, the code is DESIGNED to do just what you are describing. There is no "thinking" done by the computer. It has no mind of its own.

Computers are not sentient beings and they never will be no matter how much they might be given that appearance by their creators.
 

Right Divider

Body part
Lots of good stuff, obv; the whole post. This part here though is philosophical in nature and not strictly speaking computer science or engineering anymore; and it is contentious within the philosophical literature. A very reasonable definition of intelligence among these professionals is simply just knowledge; intelligence and knowledge are just synonyms.

They're not exact synonyms, but I'm just saying they are close enough in this context to mean the same thing, and if the machine knows anything, it must be verified through some measurement instrument, even if it's just observational.

So, for example, the cat knowing that if it patiently sits outside the mousehole it will eventually see the mouse is intelligence, under this theory or definition of the term 'intelligence'. It doesn't mean the cat can ever talk to you or communicate its knowledge–but the A.I. can. And under this definition, A.I. MAY have intelligence. But only 'may', as the matter is contentious in the philosophical literature, not in computer science or engineering.

The reason I say this is that you train the thing on content, which is in part how humans are educated too, and we test humans by asking them to deliberate and infer and even triangulate, from what they've consumed during their education, to predict an answer to a question that hasn't even been definitively answered yet. Meaning there's no answer at the back of the book. By analogy, when men invented the automobile it didn't exist yet, but it always could have existed; there was nothing logically preventing it from existing, except just that nobody had put together all the pieces of the puzzle yet.

I mean there are fictional things that will remain fictional because there's no logical possibility that they could ever be real. But for all history until the 1800s, the automobile for example was fictional, but not eternally, permanently, and terminally fictional; the automobile had the possibility of coming into existence, whereas for example Superman will forever be fictional.

I just say that that's an ontological or metaphysical distinction, between two fictional things, Superman and the automobile before it was invented. The automobile could logically possibly exist, even in the 1700s, the 900s, the 1000s BC. You could show engineering drawings to the engineers in ancient Egypt in charge of building the pyramids and they would understand, given that the materials of construction are metal, how the internal combustion engine is supposed to work, whether or not they could believe metal could be formed in so precise a way as it is when making automobile motors. Superman can never exist; just like "2+2=5" being true can never exist either (except that among professional philosophers, even THIS is contentious, but that's a digression).

So anyway, I read once that one instantiation of this L.L.M. generative/predictive A.I. was examined by its programmers through particular, planned prompts: two articles were added into all the rest of the content the thing was trained on, and they later asked it specifically about just those two pieces of content–and the thing actually figured out, as far as you could tell, that those two pieces of content were deliberately included among all the other content just so that its programmers could ask it the question/prompt that they did. I'm looking for that kind of c. self-awareness, to see if the machine had caught the trick, as a more meaningful answer to whether A.I. is really intelligent, or if it just knows stuff, like a cat knows stuff, but it's unremarkable what the cat knows.

Anyway it was really weird, and idk if it was real or fiction /apocryphal. Have you heard any weird stories about A.I.? Given your background your view in particular here holds way more weight than any of ours does, obv.
If you want to simply equate knowledge with intelligence, then AI is "intelligent".

But it is simply running a vastly complex computer program. It has no mind. It can only do what the humans created it to do. It will never contemplate its own existence (for example).
 

Clete

Truth Smacker
Silver Subscriber
It's all ones and zeros, Clete.
Yeah, so what?

No, I don't. How do you think that the computer "predicts" anything? It does so by following instructions. Those instructions are not a miracle... they are rigid.
They aren't rigid. That's just the entire point.

The code is designed by humans and executed per design.
So are we! That is, we were designed by a person (He wasn't a human at the time, of course) and we execute per that design. Even our fallen condition, being a result of sin, has something to do with how we are made. Angels are made differently, such that sin on their part has a different effect than it has had on us. They aren't completely dissimilar but they are different, and the nature of our design has a lot to do with that difference.

In other words, the fact that it is designed to work in a particular way does not mean that its responses are the result of a rigid set of rules. The fact that 500 different people can ask Chat GPT to do the exact same thing and it will produce 500 similar but still significantly different things is proof that no such rigidity exists. The machine is choosing by some self-deterministic mechanism. That doesn't mean I think it's alive or that it's sentient but merely that it has accomplished movement in that direction, away from mere robots like a Roomba vacuum cleaner or an automated welding machine in a car factory.

It all comes down to a series of "computer words" that the CPU executes in order. Now, of course, that order is quite complex due to the complex design of the code in the system. But it's still just running code. Lots and lots of complex code.

Again, the code is DESIGNED to do just what you are describing. There is no "thinking" done by the computer. It has no mind of its own.

Computers are not sentient beings and they never will be no matter how much they might be given that appearance by their creators.
I don't disagree with this much of what you're saying. At least, I don't think I do. The AI is certainly doing what it is designed to do but, as I said before, I doubt that you could describe what the AI is doing that isn't at least similar to what goes on inside our brains. Indeed, it is specifically designed to do something similar to what is believed to be happening within our brains. There is, however, a big difference between a brain and a mind. Not only that, but what the AI is doing is only similar to what is happening in the brain and so, as I said, there is still a very wide gulf between what living creatures do vs what AI does.
 

Right Divider

Body part
They aren't rigid. That's just the entire point.
I guess that you think that by "rigid" I'm saying something that I'm not.

I'm not saying that the same exact sequence of code is followed every time that it's run. It's affected by many things. But the code is still the same. The CODE does not change, even though the output will vary. That is by design and not because of any "machine independent thinking".
So are we! That is, we were designed by a person (He wasn't a human at the time, of course) and we execute per that design. Even our fallen condition, being a result of sin, has something to do with how we are made. Angels are made differently, such that sin on their part has a different effect than it has had on us. They aren't completely dissimilar but they are different, and the nature of our design has a lot to do with that difference.
We make our OWN decisions, the computer does not.
In other words, the fact that it is designed to work in a particular way does not mean that its responses are the result of a rigid set of rules.
I guess that the word "rigid" is what bothers you.

The computer does what it's told to do. Humans make their OWN choices.
The fact that 500 different people can ask Chat GPT to do the exact same thing and it will produce 500 similar but still significantly different things is proof that no such rigidity exists. The machine is choosing by some self-deterministic mechanism. That doesn't mean I think it's alive or that it's sentient but merely that it has accomplished movement in that direction, away from mere robots like a Roomba vacuum cleaner or an automated welding machine in a car factory.
The computer has no "self".

I agree that AI is vastly more complex than simple devices.
I don't disagree with this much of what you're saying. At least, I don't think I do. The AI is certainly doing what it is designed to do but, as I said before, I doubt that you could describe what the AI is doing that isn't at least similar to what goes on inside our brains.
Much of what humans design is based on what we see in nature.
Indeed, it is specifically designed to do something similar to what is believed to be happening within our brains. There is, however, a big difference between a brain and a mind.
That has been my point all along.
Not only that, but what the AI is doing is only similar to what is happening in the brain and so, as I said, there is still a very wide gulf between what living creatures do vs what AI does.
Absolutely, again that is my point.

That "very wide gulf" will always remain, even if the appearance of similarity shrinks.
 

JudgeRightly

裁判官が正しく判断する
Staff member
Administrator
Super Moderator
Gold Subscriber
It seems to me that if it were following rigid instructions, then when given the same input, it would generate the same output, but it doesn't do that. In many cases, depending on the subject matter, it doesn't even come close to doing that. Indeed, what would it even mean to program a computer to follow rigid instructions in response to various concepts that are anything but rigidly defined? What would it mean, for example, to rigidly follow instructions in response to a request to create an image of a mouse nibbling on the Moon?

I was going to respond to this, but RD hit the nail on the head here:

I'm not saying that the same exact sequence of code is followed every time that it's run. It's affected by many things. But the code is still the same. The CODE does not change, even though the output will vary. That is by design and not because of any "machine independent thinking".

The code is what is rigid.

The process of choosing which code it uses is not, by design.

Meaning, it's not outputting something because it's "intelligent."

It's outputting something because it was programmed to do so, in the manner that it does so.

It's essentially an extremely complex random number generator that can be guided to some extent on which numbers it outputs.

Nothing more, nothing less.

It does not know it exists.

It does not know what a computer program is.

It does not know anything.

It takes an input, runs code, and outputs a result.

As RD said, it has no mind of its own, and any appearance of it having a mind is purely a result of it being programmed to be so.

It's simply an imitation (and, relatively speaking, a cheap one at that), not the real thing.
 

Right Divider

Body part
It's essentially an extremely complex random number generator that can be guided to some extent on which numbers it outputs.
It's funny that you mentioned random number generators. I was going to use that as an example of simple code that gets complicated.

In Python, you can call random() many times and get different outputs every time.

Much code these days is what is called "data driven". So the code is the same, but the output varies based on other factors (i.e., the data [input]).
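Just to sketch what I mean with a trivial, made-up example (obviously nothing like a real AI codebase):

import random

def greet(name):
    """The code never changes; the output is driven by the data passed in."""
    return f"Hello, {name}!"

# Same fixed code, different data, different output.
print(greet("Clete"))
print(greet("Derf"))

# Same call, different output each run; still just instructions being executed.
print(random.random())
print(random.random())

# Seed the generator and the "randomness" becomes exactly repeatable,
# which shows there's no independent thinking behind the variation.
random.seed(42)
print(random.random())  # the same value every run once the seed is fixed

Same principle in an AI system, just with astronomically more code and data.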
 

Idolater

"Matthew 16:18-19" Dispensationalist (Catholic) χρ
If you want to simply equate knowledge with intelligence, then AI is "intelligent".

But it is simply running a vastly complex computer program. It has no mind.
But what is meant by "mind"? I for one cannot see any difference between "mind" and "soul", for example, so if you're saying A.I. has no soul, I agree; and I'd further state there is no ontological possibility that A.I. would have a soul either. It couldn't happen by accident, nor by any amount of rigid code, ones and zeroes.

It can only do what the humans created it to do. It will never contemplate its own existence (for example).
But the anecdote I mentioned above (see below) suggests it has.

... one instantiation of this L.L.M. generative /predictive A.I. was examined through particular and planned prompts by its programmers, by two articles being added into all the rest of the content the thing was trained on, and they later on[,] asked particularly about just those two pieces of content–and the thing actually figured out, as far as you could tell, that those two pieces of content were deliberately included among all the other content, just so that its programmers could ask it the question /prompt that they did [ask. To me] that kind of c. self-awareness [is] a more meaningful answer to whether A.I. is really intelligent, or if it just knows stuff, like a cat knows stuff, but it's unremarkable what the cat knows.
Thoughts?
 

JudgeRightly

裁判官が正しく判断する
Staff member
Administrator
Super Moderator
Gold Subscriber
But the anecdote I mentioned above (see below) suggests it has.

No, a computer program doesn't know it exists. It can never "contemplate its own existence," because to do so requires a thinking mind.

Something which a computer program will never have.

Don't confuse figurative language with reality.

When a program is processing information, say, on a really hard math problem (which, by the way, is literally most of the calculations being done by any computer program), we say "give it a moment, it's thinking."

But that's an anthropomorphism. We are arbitrarily assigning human characteristics to an inanimate object. It doesn't actually mean that the computer is literally "thinking," but rather, it has the appearance of a human thinking about a problem. It has no mind with which to think, analogies notwithstanding.
 

Idolater

"Matthew 16:18-19" Dispensationalist (Catholic) χρ
... When a program is processing information, say, on a really hard math problem (which, by the way, is literally most of the calculations being done by any computer program ...
100%

A.I. is 100% just math: it's the machine being programmed to transform all manner of data, including visuals and video, sounds, and texts, into math, ones and zeroes, so that it can make math models, which generate new content that didn't go into making the math models.
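Just as a toy illustration (this isn't how any particular model actually encodes its data; it's only to show the "everything becomes math" point), here's text turning into numbers, and numbers into ones and zeroes, in Python:

# Turning text into numbers that math can then be done on.
text = "mouse"

# Each character becomes an integer code...
codes = [ord(ch) for ch in text]
print(codes)  # [109, 111, 117, 115, 101]

# ...and each integer can be written out as ones and zeroes.
bits = [format(c, "08b") for c in codes]
print(bits)   # ['01101101', '01101111', '01110101', '01110011', '01100101']

From there on it's all arithmetic on numbers like these; the model never handles a picture or a word as anything other than math.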
 

Derf

Well-known member
But what is meant by "mind"? I for one cannot see any difference between "mind" and "soul", for example, so if you're saying A.I. has no soul, I agree; and I'd further state there is no ontological possibility that A.I. would have a soul either. It couldn't happen by accident, nor by any amount of rigid code, ones and zeroes.


But the anecdote I mentioned above (see below) suggests it has.


Thoughts?
Soul is not the same as mind. When God breathed into man the breath of life, he became a living soul. When the breath of life leaves us, we die and return to dust. The "soul" appears to be the whole man, at least in Genesis 2.

If we are able to create a being that has a mind and body and lives, we will have created a living soul, just not a human one.
 