Commit 4e93c39

Merge pull request #5 from alexrudall/improve_documentation
Improve documentation
2 parents 4bd22cd + bf9a5f7 commit 4e93c39

File tree

10 files changed: +300 −13 lines changed

.github/ISSUE_TEMPLATE/bug_report.md

Lines changed: 38 additions & 0 deletions

@@ -0,0 +1,38 @@
+---
+name: Bug report
+about: Create a report to help us improve
+title: ''
+labels: ''
+assignees: ''
+
+---
+
+**Describe the bug**
+A clear and concise description of what the bug is.
+
+**To Reproduce**
+Steps to reproduce the behavior:
+1. Go to '...'
+2. Click on '....'
+3. Scroll down to '....'
+4. See error
+
+**Expected behavior**
+A clear and concise description of what you expected to happen.
+
+**Screenshots**
+If applicable, add screenshots to help explain your problem.
+
+**Desktop (please complete the following information):**
+ - OS: [e.g. iOS]
+ - Browser [e.g. chrome, safari]
+ - Version [e.g. 22]
+
+**Smartphone (please complete the following information):**
+ - Device: [e.g. iPhone6]
+ - OS: [e.g. iOS8.1]
+ - Browser [e.g. stock browser, safari]
+ - Version [e.g. 22]
+
+**Additional context**
+Add any other context about the problem here.
.github/ISSUE_TEMPLATE/feature_request.md

Lines changed: 20 additions & 0 deletions

@@ -0,0 +1,20 @@
+---
+name: Feature request
+about: Suggest an idea for this project
+title: ''
+labels: ''
+assignees: ''
+
+---
+
+**Is your feature request related to a problem? Please describe.**
+A clear and concise description of what the problem is. Ex. I'm always frustrated when [...]
+
+**Describe the solution you'd like**
+A clear and concise description of what you want to happen.
+
+**Describe alternatives you've considered**
+A clear and concise description of any alternative solutions or features you've considered.
+
+**Additional context**
+Add any other context or screenshots about the feature request here.

CHANGELOG.md

Lines changed: 12 additions & 0 deletions

@@ -7,6 +7,18 @@ and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0
 
 ## [Unreleased]
 
+## [0.1.2] - 2020-09-09
+
+### Added
+
+- Add tests and cached responses for the different engines.
+- Add issue templates.
+
+### Changed
+
+- Add README instructions for using the gem without dotenv.
+- Add list of engines to README.
+
 ## [0.1.1] - 2020-09-08
 
 ### Added
Gemfile.lock

Lines changed: 1 addition & 1 deletion

@@ -1,7 +1,7 @@
 PATH
   remote: .
   specs:
-    ruby-openai (0.1.1)
+    ruby-openai (0.1.2)
       dotenv (~> 2.7.6)
       httparty (~> 0.18.1)
README.md

Lines changed: 17 additions & 4 deletions

@@ -26,23 +26,36 @@ Or install it yourself as:
 
 Get your API key from [https://beta.openai.com/docs/developer-quickstart/your-api-keys](https://beta.openai.com/docs/developer-quickstart/your-api-keys)
 
-Add your secret key to your .env file:
+### With dotenv
+
+If you're using [dotenv](https://github.com/motdotla/dotenv), you can add your secret key to your .env file:
 
 ```
 OPENAI_ACCESS_TOKEN=secretkeygoeshere
 ```
 
-Create a client:
+And create a client:
 
 ```
 client = OpenAI::Client.new
 ```
 
-Use it to hit the OpenAI API for a completion:
+### Without dotenv
+
+Alternatively, you can pass your key directly to a new client:
+
+```
+client = OpenAI::Client.new(access_token: "access_token_goes_here")
+```
+
+### Get a response
+
+The engine options are currently "ada", "babbage", "curie" and "davinci". Hit the OpenAI API for a completion:
 
 ```
 response = client.call(engine: "davinci", prompt: "Once upon a time", max_tokens: 5)
-response.parsed_response['choices'].map{ |c| c["text"] }
+puts response.parsed_response['choices'].map{ |c| c["text"] }
+=> [", there lived a great"]
 ```
 
 ## Development

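The last hunk above extracts the generated text from the parsed response with `['choices'].map { |c| c["text"] }`. That extraction step can be sketched offline against a hard-coded hash of the same shape — the hash below is a hypothetical stand-in for `response.parsed_response`, not a real API reply:

```ruby
# Hypothetical stand-in for response.parsed_response in the README example;
# a real completion response carries additional fields beyond "choices".
parsed_response = {
  "choices" => [
    { "text" => ", there lived a great" }
  ]
}

# Same extraction as the README: collect the generated text of each choice.
texts = parsed_response["choices"].map { |c| c["text"] }
puts texts.inspect
# => [", there lived a great"]
```

With several choices in the array (e.g. when requesting multiple completions), the same `map` returns one string per choice.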
lib/ruby/openai/version.rb

Lines changed: 1 addition & 1 deletion

@@ -1,5 +1,5 @@
 module Ruby
   module OpenAI
-    VERSION = "0.1.1".freeze
+    VERSION = "0.1.2".freeze
   end
 end

spec/fixtures/cassettes/ada_Once_upon_a_time_5.yml

Lines changed: 55 additions & 0 deletions (generated VCR cassette; not rendered by default)

spec/fixtures/cassettes/babbage_Once_upon_a_time_5.yml

Lines changed: 55 additions & 0 deletions (generated VCR cassette; not rendered by default)

spec/fixtures/cassettes/curie_Once_upon_a_time_5.yml

Lines changed: 55 additions & 0 deletions (generated VCR cassette; not rendered by default)

spec/ruby/openai/client_spec.rb

Lines changed: 46 additions & 7 deletions

@@ -3,16 +3,55 @@
     expect { OpenAI::Client.new }.not_to raise_error
   end
 
-  context "with an engine, prompt and max_tokens", :vcr do
-    let(:engine) { "davinci" }
+  context "with a prompt and max_tokens", :vcr do
     let(:prompt) { "Once upon a time" }
     let(:max_tokens) { 5 }
 
-    it "can make a request to the OpenAI API" do
-      VCR.use_cassette("#{engine} #{prompt} #{max_tokens}") do
-        response = OpenAI::Client.new.call(engine: engine, prompt: prompt, max_tokens: max_tokens)
-        text = JSON.parse(response.body)["choices"].first["text"]
-        expect(text.split(" ").empty?).to eq(false)
+    context "with engine: ada" do
+      let(:engine) { "ada" }
+
+      it "can make a request to the OpenAI API" do
+        VCR.use_cassette("#{engine} #{prompt} #{max_tokens}") do
+          response = OpenAI::Client.new.call(engine: engine, prompt: prompt, max_tokens: max_tokens)
+          text = JSON.parse(response.body)["choices"].first["text"]
+          expect(text.split(" ").empty?).to eq(false)
+        end
+      end
+    end
+
+    context "with engine: babbage" do
+      let(:engine) { "babbage" }
+
+      it "can make a request to the OpenAI API" do
+        VCR.use_cassette("#{engine} #{prompt} #{max_tokens}") do
+          response = OpenAI::Client.new.call(engine: engine, prompt: prompt, max_tokens: max_tokens)
+          text = JSON.parse(response.body)["choices"].first["text"]
+          expect(text.split(" ").empty?).to eq(false)
+        end
+      end
+    end
+
+    context "with engine: curie" do
+      let(:engine) { "curie" }
+
+      it "can make a request to the OpenAI API" do
+        VCR.use_cassette("#{engine} #{prompt} #{max_tokens}") do
+          response = OpenAI::Client.new.call(engine: engine, prompt: prompt, max_tokens: max_tokens)
+          text = JSON.parse(response.body)["choices"].first["text"]
+          expect(text.split(" ").empty?).to eq(false)
+        end
+      end
+    end
+
+    context "with engine: davinci" do
+      let(:engine) { "davinci" }
+
+      it "can make a request to the OpenAI API" do
+        VCR.use_cassette("#{engine} #{prompt} #{max_tokens}") do
+          response = OpenAI::Client.new.call(engine: engine, prompt: prompt, max_tokens: max_tokens)
+          text = JSON.parse(response.body)["choices"].first["text"]
+          expect(text.split(" ").empty?).to eq(false)
+        end
       end
     end
   end

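The four per-engine contexts in that spec differ only in the engine name; a common RSpec pattern (not used in this commit) is to generate them in a loop. The cassette-naming interpolation such a loop relies on can be sketched in plain Ruby — the engine list comes from the README, while the `cassette_name` helper is illustrative only:

```ruby
# The engines exercised by the spec, per the README.
ENGINES = %w[ada babbage curie davinci].freeze

# Hypothetical helper mirroring the spec's
# VCR.use_cassette("#{engine} #{prompt} #{max_tokens}") interpolation.
def cassette_name(engine, prompt, max_tokens)
  "#{engine} #{prompt} #{max_tokens}"
end

names = ENGINES.map { |engine| cassette_name(engine, "Once upon a time", 5) }
puts names
# => ["ada Once upon a time 5", "babbage Once upon a time 5",
#     "curie Once upon a time 5", "davinci Once upon a time 5"]
```

In the spec itself, the equivalent refactor would wrap a single `context "with engine: #{engine_name}"` block in `ENGINES.each do |engine_name| ... end`, collapsing the four near-identical blocks into one.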