Yeah bros, what is working on ChatGPT?
This seems to become a more important question every day, as there are countless examples of ChatGPT failing at tasks which a high school student could solve by simply thinking.
It starts with basic logic like counting the r's in "strawberry" and ends with using it professionally. So in this post we will discuss what is working and what is not.
It's important to understand that ChatGPT is not a miracle AI and most likely won't be one any time soon. Rather, it's a tool which helps you with your work if you already have basic know-how about the topic, and this is exactly where the biggest problem of ChatGPT lies.
You can't actually use LLMs (Large Language Models) for applied logic; they are simply not able to do it. Be it simple math or counting letters, this is because ChatGPT doesn't actually do the calculation on its own.
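Just to show how low the bar is, here is a tiny Python snippet (illustration only) that does deterministically what the model stumbles over:

    # Counting letters is trivial for ordinary code: str.count() is
    # deterministic, while the model only predicts likely tokens.
    word = "strawberry"
    print(word.count("r"))  # 3, every single time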
When it needs a calculation, it instead calls different providers like Wolfram Alpha or DALL·E 3, and this is a huge problem, because more providers means more places where things can fail, as you can see in countless posts.
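Here is a minimal sketch of what that routing looks like conceptually. The function names are made-up stand-ins, not OpenAI's actual internals, but every hand-off in it is a place where the real pipeline can fail in the same way:

    # Illustrative only: a router that hands queries off to external
    # providers. Every hand-off is one more place the pipeline can fail.
    def call_math_provider(query: str) -> str:
        # stand-in for an external solver like Wolfram Alpha
        raise TimeoutError("provider did not answer")

    def generate_text(query: str) -> str:
        # stand-in for a plain LLM completion
        return f"Some plausible-sounding text about: {query}"

    def route(query: str) -> str:
        if any(ch.isdigit() for ch in query):  # crude "is this math?" check
            try:
                return call_math_provider(query)
            except TimeoutError:
                return "Sorry, something went wrong."  # the failure users see
        return generate_text(query)

    print(route("what is 17 * 23?"))  # provider path fails, user gets an apology
    print(route("write me a poem"))   # text path works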
It's also not great for coding, because OpenAI tries to censor hacking, and not only hacking: the defensive side is censored and banned on the platform as well, so it simply refuses to do anything related to security.
The language it generates is the result of a statistical operation which is not made to find the best solution; it's made to find any solution which is allowed by OpenAI.
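To make "statistical operation" concrete, here is a toy next-token step with invented scores; the sampler picks what is likely, not what is verified to be correct:

    import math
    import random

    # Toy next-token step: the model scores candidate continuations and
    # samples one by probability. Scores below are invented for illustration.
    scores = {"4": 6.0, "3": 4.5, "5": 2.0}  # candidate answers to "2 + 2 ="

    exps = {tok: math.exp(s) for tok, s in scores.items()}
    total = sum(exps.values())
    probs = {tok: e / total for tok, e in exps.items()}

    token = random.choices(list(probs), weights=list(probs.values()))[0]
    print(probs)   # "3" still gets real probability mass...
    print(token)   # ...so once in a while the sampler answers 2 + 2 = 3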
So we have a company which censors a lot of work, we have providers which make errors, and we have absolutely no validation of the stuff it says.
It's also not localizing its users; instead it (Copilot at least) just remembers things from past conversations. So it can't actually tell what you want.
To actually get the results you need, you will have to invest a lot of time poking around the answers and doing what we call prompt engineering, which kinda feels like staring at vomit and hoping it magically turns into gold.
I try to test all the AIs, and let's just say the others are not better.
Copilot, which is mostly a customized ChatGPT, performs just as badly as ChatGPT and gives some problems the same answers as GPT.
This has gotten so far that I'm now just asking in FB groups what is working, letting the members describe what they are using, and then arguing with them about why it's a bad idea.
Here is an example:
So we have a user saying it wrote a program for him. That's great in principle, because it gives users who can't program the option to get the apps they want, customized the way they want, and I love that idea.
However, this is where it fails:
a. No security
b. No public-facing applications
c. There is a huge chance that you'll spend another 3 hours debugging whatever GPT created for you (yes, it posts code with bugs).
And it's not only the code.
The errors are in every aspect of the application.
So here is the problem: why would I want to debug another programmer's code which doesn't follow applied logic?
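If you do take code from GPT, the least you can do is wrap it in a quick sanity test before trusting it. Here is a minimal sketch; parse_price is a hypothetical example of the kind of function GPT hands you, bug included:

    # Minimal sanity harness for AI-generated code. parse_price is a
    # hypothetical GPT output: it looks fine but breaks on real input.
    def parse_price(text: str) -> float:
        return float(text.strip("$"))

    tests = [
        ("$19.99", 19.99),
        ("19.99", 19.99),
        ("$1,299.00", 1299.00),  # thousands separator: this is where it falls over
    ]

    for raw, expected in tests:
        try:
            got = parse_price(raw)
            status = "ok" if got == expected else f"WRONG ({got})"
        except ValueError as err:
            status = f"CRASH ({err})"
        print(f"{raw!r}: {status}")

Three lines of test cases already catch the bug before you ship it, which is more validation than GPT itself ever does.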
If you find any working features, please comment and let's discuss them.