When watching the video "How ChatGPT Works Technically | ChatGPT Architecture," I found it fascinating to learn that words are represented by numbers, since numbers are easier for the model to process. This raises the question of just how reliable these models are, as one slight misspelling can change the numbers the model sees and skew the results. For instance, if I were texting a friend about my plans for the holidays, and they told me they were "going home to visit their parents," and I responded, "Yes, I think I will go home to visit my pants too," they would easily deduce my intended statement from the context of our conversation. AI models struggle to offer that kind of fluid thinking in these situations.
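To make the "words as numbers" idea concrete, here is a minimal sketch of a word-level tokenizer with a made-up vocabulary. This is only an illustration of the concept, not how ChatGPT actually tokenizes text (real systems use subword tokenizers such as byte-pair encoding), but it shows how a single misspelled word becomes a completely different number in the sequence the model receives:

```python
# Toy word-level tokenizer: each known word maps to an integer ID.
# The vocabulary below is invented purely for this example.
vocab = {
    "i": 1, "think": 2, "will": 3, "go": 4, "home": 5,
    "to": 6, "visit": 7, "my": 8, "parents": 9, "pants": 10, "too": 11,
}
UNKNOWN = 0  # ID assigned to any word not in the vocabulary

def encode(sentence: str) -> list[int]:
    """Convert a sentence into the list of numbers the model actually sees."""
    return [vocab.get(word, UNKNOWN) for word in sentence.lower().split()]

intended = encode("I think I will go home to visit my parents too")
typo     = encode("I think I will go home to visit my pants too")

print(intended)  # [1, 2, 1, 3, 4, 5, 6, 7, 8, 9, 11]
print(typo)      # [1, 2, 1, 3, 4, 5, 6, 7, 8, 10, 11]
```

Notice that "parents" and "pants" become unrelated integers (9 versus 10): the numeric form carries no hint that the two words look alike on the page, which is why a typo can matter more to a model than to a human reader.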
I would also like to learn more about what the "constraints" of an AI model means. In my own experience with constraints in manufacturing, a constraint represents where we are falling short or what may be holding us back. Does the word mean the same thing in AI, or does it simply mean the rules and conditions within which the AI model operates?