I have been working with ChatGPT (and GPT-3 before it) since the launch. We are testing ChatGPT for use with public procurement texts, for instance. For this we are using the APIs, so we have programs sending texts to and receiving texts from it. (My original training was in theoretical mathematics and computer science, and I was already working on NLP back in the 1980s.)
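To give a concrete picture of what "programs sending/receiving texts" means in practice, a call to the API can look roughly like the sketch below. The prompt wording and parameter values are my own illustration, not our production code, and it assumes the same pre-1.0 openai Python library that appears in the conversation further down.

import openai

openai.api_key = "YOUR_API_KEY_HERE"

def ask_about_text(document, question):
    # Send a document plus a question and return the model's answer.
    # The prompt format here is only illustrative.
    response = openai.Completion.create(
        engine="davinci",
        prompt=f"{document}\n\nQuestion: {question}\nAnswer:",
        max_tokens=256,
        temperature=0.2,  # low temperature: we want consistent, not creative, answers
    )
    return response.choices[0].text.strip()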
As I found in my December 10 tests, ChatGPT does not produce correct or truthful texts, so everything has to be verified. Sometimes this is so time-consuming that it renders the service useless. As a research tool I would also be skeptical, because you get no idea of what it will miss or leave out.
It is, however, very useful for producing program code snippets. Snippets are short pieces of code (less than a page) that deal with one very specific thing, and they can easily be verified. The time from ”how do I iterate through all the pages of a web site using python” to testing the code from ChatGPT can be less than a minute. Here the effort of verification is low, and getting the snippet is much faster than, for instance, trawling Stack Overflow (the largest body of shared code solutions).
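As an illustration of such a snippet, here is a minimal sketch of iterating through the pages of a site that uses a ?page=N query parameter. The URL scheme and the stop condition are assumptions; real sites differ, which is exactly why these snippets still need to be verified.

import requests

def fetch_all_pages(base_url):
    # Yield the HTML of every page of a site that paginates with ?page=N.
    page = 1
    while True:
        response = requests.get(base_url, params={"page": page}, timeout=10)
        if response.status_code != 200 or not response.text.strip():
            break  # stop when the site runs out of pages
        yield response.text
        page += 1

# Example use with a placeholder address:
# for html in fetch_all_pages("https://example.com/listing"):
#     print(len(html))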
Interestingly, since ChatGPT is a pathological liar, the results are not always correct. Hence, you need to understand and verify the code properly.
The weirdest example I have encountered so far is when it was wrong about itself and then apologized. That ”conversation” is below.
ChatGPT has fascinated the whole world and its dog, for good reason. It is a monumental step in NLP and transformer models. (The T in GPT stands for transformer.) Basically all recent progress in chatbots, Siri, Alexa, etc. stems from the invention of transformers, published in 2017 (reference at the bottom). There is, however, no semantic layer: it works purely with the syntax of language and the probabilities of sequences of words. (This is, however, not in any way trivial.) The origin of the work in this field is of course search-phrase completion, as at Google: they want to guess the next word you are about to type.
So for me the answer to the big question is the same as almost 40 years ago: We have managed the artificial, now only the intelligence remains to be fixed.
(The boring truth is that AI, as of today, is *just* probabilities. What is the most probable next word? What is the probability that the car's camera is looking at a cyclist? What is the probability of the spot on the skin being cancer?)
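To make the "just probabilities" point concrete, here is a toy sketch of "what is the most probable next word?" using simple bigram counts over a tiny text. A real model like GPT uses a neural network over much longer contexts, but its output is still a probability distribution over the next token.

from collections import Counter, defaultdict

def next_word_probabilities(corpus):
    # Count which word follows which, then normalise the counts to probabilities.
    words = corpus.lower().split()
    following = defaultdict(Counter)
    for current, nxt in zip(words, words[1:]):
        following[current][nxt] += 1
    return {word: {nxt: count / sum(counts.values()) for nxt, count in counts.items()}
            for word, counts in following.items()}

probs = next_word_probabilities("the car sees a cyclist and the car stops")
print(probs["car"])  # {'sees': 0.5, 'stops': 0.5}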
Here is the conversation. (Like Klaus Krippendorff did, ChatGPT ignores uppercase in conversations.)
LARS:
chatgpt api gives multilple results that are the same
ChatGPT:
If the ChatGPT API is returning multiple results that are the same, it may be due to a couple of reasons:
1. The prompt used to generate the text may be too generic or simple, resulting in the model generating similar responses.
2. The parameters used to generate the text may be too similar or not varied enough, resulting in the model generating similar responses.
To address this issue, you can try the following:
1. Use more specific or complex prompts that require the model to generate more varied responses.
2. Vary the parameters used to generate the text, such as the temperature, max_tokens, and top_p parameters, to encourage the model to generate more varied responses.
Here's an example of how you could vary the temperature and top_p parameters:
import openai

openai.api_key = "YOUR_API_KEY_HERE"

def generate_text(prompt):
    response = openai.Completion.create(
        engine="davinci",
        prompt=prompt,
        temperature=[0.5, 0.7, 0.9],
        max_tokens=1024,
        n=1,
        stop=None,
        top_p=[0.5, 0.7, 0.9],
    )
    messages = []
    for choice in response.choices:
        message = choice.text.strip()
        if message not in messages:
            messages.append(message)
    return messages
In this example, the temperature and top_p parameters are set to arrays of values to generate text with different levels of creativity and randomness. The messages list is used to keep track of unique messages, and any duplicated messages are discarded.
Using these methods should help to produce more varied and unique responses from the ChatGPT API.
LARS:
your example returns the error: InvalidRequestError: [0.5, 0.7, 0.9] is not of type 'number' - 'temperature'
ChatGPT:
I apologize for the error in my previous example. The temperature parameter should be a number and not an array of numbers.
Here is an updated example that sets the temperature and top_p parameters to single values, and returns only the best answer:
import openai

openai.api_key = "YOUR_API_KEY_HERE"

def generate_text(prompt):
    response = openai.Completion.create(
        engine="davinci",
        prompt=prompt,
        temperature=0.7,
        max_tokens=1024,
        n=1,
        stop=None,
        top_p=1.0,
    )
    return response.choices[0].text.strip()
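For reference: since temperature and top_p must each be a single number per request, one way to actually get varied answers is to make one call per temperature value and collect the unique results. This is my own sketch, assuming the same pre-1.0 openai library and "davinci" engine as above.

import openai

openai.api_key = "YOUR_API_KEY_HERE"

def generate_varied_texts(prompt, temperatures=(0.5, 0.7, 0.9)):
    # One API call per temperature value; the API requires a single number per call.
    messages = []
    for temperature in temperatures:
        response = openai.Completion.create(
            engine="davinci",
            prompt=prompt,
            temperature=temperature,
            max_tokens=256,
        )
        text = response.choices[0].text.strip()
        if text not in messages:
            messages.append(text)
    return messages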
Transformers were introduced in:
Vaswani, A., et al. (2017). Attention is all you need. Proceedings of the 31st International Conference on Neural Information Processing Systems. Long Beach, California, USA, Curran Associates Inc.: 6000–6010.
Med Vänlig Hälsning / Best Regards,
Lars Albinsson
Consultant - Innovation & Creative Processes
CEO & Ph D Candidate
+46 (0) 705927045
[log in to unmask]
Maestro Design & Management AB
www.maestro.se
> On 13 March 2023, at 05:26, David Sless <[log in to unmask]> wrote:
>
> Hi David,
>
> I’ll give it a try and report back.
>
> David
>
-----------------------------------------------------------------
PhD-Design mailing list <[log in to unmask]>
Discussion of PhD studies and related research in Design
Subscribe or Unsubscribe at https://www.jiscmail.ac.uk/phd-design
-----------------------------------------------------------------