Perchance just lets you interact with an LLM. Which, simplified, is just predictive text / autocomplete on steroids.
The model sees a wall of text (your conversation history, whatever you put in the settings, character fields etc) and evaluates what the most probable next words / half-words / characters might be, based on the gazillion texts it was trained on.
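To make "autocomplete on steroids" concrete, here's a toy illustration (nothing like a real LLM's architecture, just the core idea): count which word most often follows the current one in a tiny "training set", then always predict the most frequent continuation.

```python
# Toy next-word predictor: count word -> next-word frequencies in a
# tiny corpus, then predict the statistically most common continuation.
# This is a deliberate oversimplification of what an LLM does at scale.
from collections import Counter, defaultdict

training_text = (
    "the earth goes around the sun . "
    "the moon goes around the earth . "
    "the earth goes around the sun ."
).split()

# For each word, tally which words followed it in the training text.
follows = defaultdict(Counter)
for current, nxt in zip(training_text, training_text[1:]):
    follows[current][nxt] += 1

def predict_next(word):
    """Return the most frequent continuation seen in training."""
    return follows[word].most_common(1)[0][0]

print(predict_next("around"))  # "the" — the only word ever seen after "around"
print(predict_next("the"))     # "earth" — seen more often than "sun" or "moon"
```

A real model does this with probabilities over sub-word tokens and billions of parameters instead of raw counts, but the principle is the same: the output is whatever is statistically most plausible given what came before.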
It does not have sources to show you, or a list of URLs to give you to explain how it came up with stuff.
If you ask for a link (i.e. the wall of text ends with a request for a link), it will continue the text by writing something that looks reasonably like a URL that fits the preceding bits of text.
Much of what the model generates may be factually correct if the training set included enough of that kind of information (making it so “yes” seems like the most probable continuation to a text ending in “does the earth go around the sun?”), but you’ll always have to verify what an LLM tells you, because most models (even the expensive ones like ChatGPT or Claude) will just as confidently state complete bullshit and pass it off as truth. After all, you’re just getting the statistically most plausible text to continue what came before. There’s no “awareness” of knowledge or facts, just statistics.
You most likely also don’t have a model with “internet access” unless you added some custom code. And even then, that usually only grabs a URL from your message and puts the text from that page into the wall of text (not youtube videos, unless you link directly to a transcription) alongside your conversation history before the model generates the next bit of text.
Searching the web would again require you to add custom code and to set up access to an API that allows searching the web.
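A rough sketch of what such "internet access" custom code typically amounts to (the function names here are made up for illustration, not any real Perchance API): find a URL in the user's message, download that page, strip it down to text, and paste it into the wall of text before the model runs.

```python
# Hedged sketch of a typical "URL in message -> page text in prompt" hack.
# Assumptions: helper names (fetch_page_text, build_prompt) are invented;
# the tag-stripping is deliberately naive.
import re
from urllib.request import urlopen  # only used for a real fetch

URL_PATTERN = re.compile(r"https?://\S+")

def fetch_page_text(url):
    """Naive fetch: download the page and crudely strip HTML tags."""
    raw = urlopen(url, timeout=10).read().decode("utf-8", errors="replace")
    return re.sub(r"<[^>]+>", " ", raw)

def build_prompt(conversation, user_message, fetch=fetch_page_text):
    """Prepend fetched page text (if the message contains a URL) to the
    wall of text the model will see."""
    match = URL_PATTERN.search(user_message)
    page_text = fetch(match.group()) if match else ""
    parts = []
    if page_text:
        parts.append("Content of " + match.group() + ":\n" + page_text)
    parts.append(conversation)
    parts.append(user_message)
    return "\n\n".join(parts)
```

Note the model still never "browses" anything: the fetched text just becomes more input text, and the model autocompletes from there. Actual web *search* would additionally need a search API key and code to call it.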