Commit 313b1a1
chore: update to gpt-4-turbo
1 parent 729d55a

64 files changed, +115 -116 lines changed

.pre-commit-config.yaml (+7 -7)

@@ -3,7 +3,7 @@ default_language_version:
   python: python3.11
 repos:
   - repo: https://github.com/codespell-project/codespell
-    rev: v2.2.6
+    rev: v2.3.0
     hooks:
       - id: codespell
         args: ["--ignore-words=codespell.txt"]
@@ -21,11 +21,11 @@ repos:
     hooks:
       - id: prettier
   - repo: https://github.com/psf/black
-    rev: 23.12.1
+    rev: 24.10.0
     hooks:
       - id: black
   - repo: https://github.com/PyCQA/flake8
-    rev: 7.0.0
+    rev: 7.1.1
     hooks:
       - id: flake8
   - repo: https://github.com/PyCQA/isort
@@ -41,12 +41,12 @@ repos:
         language: script
         types: [python]
   - repo: https://github.com/PyCQA/bandit
-    rev: 1.7.7
+    rev: 1.7.10
     hooks:
       - id: bandit
         args: ["-ll"]
   - repo: https://github.com/pre-commit/pre-commit-hooks
-    rev: v4.5.0
+    rev: v5.0.0
     hooks:
       # See https://pre-commit.com/hooks.html for more hooks
       #- id: check-added-large-files
@@ -69,14 +69,14 @@ repos:
       - id: check-merge-conflict
       - id: debug-statements
   - repo: https://github.com/gruntwork-io/pre-commit
-    rev: v0.1.23 # Get the latest from: https://github.com/gruntwork-io/pre-commit/releases
+    rev: v0.1.24 # Get the latest from: https://github.com/gruntwork-io/pre-commit/releases
     hooks:
       - id: terraform-fmt
       - id: helmlint
       - id: terraform-validate
       - id: tflint
   - repo: https://github.com/alessandrojcm/commitlint-pre-commit-hook
-    rev: v9.11.0
+    rev: v9.18.0
     hooks:
       - id: commitlint
         stages: [commit-msg]
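Revision bumps like the ones above are typically produced by `pre-commit autoupdate`. As a quick sanity check of which revs end up pinned, here is a minimal stdlib-only sketch that extracts each `repo`/`rev` pair from a config excerpt (the `CONFIG` string and `pinned_revs` helper are illustrative, not part of the commit):

```python
import re

# Illustrative excerpt of the updated .pre-commit-config.yaml.
CONFIG = """\
repos:
  - repo: https://github.com/codespell-project/codespell
    rev: v2.3.0
  - repo: https://github.com/psf/black
    rev: 24.10.0
  - repo: https://github.com/PyCQA/bandit
    rev: 1.7.10
"""

def pinned_revs(config_text):
    """Return a {repo_url: rev} mapping parsed from a pre-commit config."""
    pairs = re.findall(r"- repo: (\S+)\n\s+rev: (\S+)", config_text)
    return dict(pairs)

print(pinned_revs(CONFIG))
```

A regex is enough here because pre-commit configs place `rev:` on the line directly after `repo:`; a YAML parser would be the robust choice for arbitrary configs.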

CHANGELOG.md (+3 -5)

@@ -1,16 +1,14 @@
 ## [0.10.7](https://github.com/FullStackWithLawrence/aws-openai/compare/v0.10.6...v0.10.7) (2024-11-01)
 
-
 ### Bug Fixes
 
-
-* force a new release ([95401cb](https://github.com/FullStackWithLawrence/aws-openai/commit/95401cb87fbf941eb3237981f48fc535fcf1a7a4))
+- force a new release ([95401cb](https://github.com/FullStackWithLawrence/aws-openai/commit/95401cb87fbf941eb3237981f48fc535fcf1a7a4))
 
 ## [0.10.6](https://github.com/FullStackWithLawrence/aws-openai/compare/v0.10.5...v0.10.6) (2024-01-29)
 
-
 ### Bug Fixes
 
-
-* force a new release ([d411e41](https://github.com/FullStackWithLawrence/aws-openai/commit/d411e415f2657e1b2b9475c6434de55b677d8262))
+- force a new release ([d411e41](https://github.com/FullStackWithLawrence/aws-openai/commit/d411e415f2657e1b2b9475c6434de55b677d8262))
 
 ## [0.10.5](https://github.com/FullStackWithLawrence/aws-openai/compare/v0.10.4...v0.10.5) (2024-01-28)
 
@@ -257,7 +255,7 @@ OpenAI 'Function Calling' Lambda.
 
 ## [0.3.0] (2023-11-01)
 
-The YouTube video for this release: [AWS Lamba Layers: When and How to use them](https://youtu.be/5Jf34t_UlZA)
+The YouTube video for this release: [AWS Lambda Layers: When and How to use them](https://youtu.be/5Jf34t_UlZA)
 
 - add lambda_langchain
 - add lambda_openai_v2

api/README.md (+4 -4)

@@ -34,7 +34,7 @@ return value
   "id": "chatcmpl-7yLxpF7ZsJzF3FTUICyUKDe1Ob9nd",
   "object": "chat.completion",
   "created": 1694618465,
-  "model": "gpt-3.5-turbo-0613",
+  "model": "gpt-4-turbo-0613",
   "choices": [
     {
       "index": 0,
@@ -110,7 +110,7 @@ Example valid request body:
 
 ```json
 {
-  "model": "gpt-3.5-turbo",
+  "model": "gpt-4-turbo",
   "end_point": "ChatCompletion",
   "temperature": 0.9,
   "max_tokens": 1024,
@@ -250,7 +250,7 @@ CORS is always a tedious topics with regard to REST API's. Please note the follo
 - the hoped-for 200 response status that is returned by Lambda
 - the less hoped-for 400 and 500 response statuses returned by Lambda
 - and the even less hoped-for 400 and 500 response statuses that can be returned by API Gateway itself in certain cases such as a.) Lambda timeout, b.) invalid api key credentials, amongst other possibilities.
-- For audit and trouble shooting purposes, Cloudwatch logs exist for API Gateway as well as the two Lambas, [openai_text](../api/terraform/python/openai_text/openai_text.py) and [openai_cors_preflight_handler](../api/terraform/nodejs/openai_cors_preflight_handler/index.mjs)
+- For audit and trouble shooting purposes, Cloudwatch logs exist for API Gateway as well as the two Lamdbas, [openai_text](../api/terraform/python/openai_text/openai_text.py) and [openai_cors_preflight_handler](../api/terraform/nodejs/openai_cors_preflight_handler/index.mjs)
 
 In each case this project attempts to compile an http response that is as verbose as technically possible given the nature and origin of the response data.
 
@@ -279,7 +279,7 @@ a static example response from the OpenAI chatgpt-3.5 API
   ],
   "created": 1697495501,
   "id": "chatcmpl-8AQPdETlM808Fp0NjEeCOOc3a13Vt",
-  "model": "gpt-3.5-turbo-0613",
+  "model": "gpt-4-turbo-0613",
   "object": "chat.completion",
   "usage": {
     "completion_tokens": 20,
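The request body documented in api/README.md can be assembled programmatically. A minimal sketch of building the payload shape shown above (the `build_chat_request` helper and its defaults are illustrative; only the field names and values come from the README example):

```python
import json

def build_chat_request(user_prompt, model="gpt-4-turbo",
                       temperature=0.9, max_tokens=1024):
    """Build a request body matching the shape documented in api/README.md."""
    return {
        "model": model,
        "end_point": "ChatCompletion",
        "temperature": temperature,
        "max_tokens": max_tokens,
        "messages": [
            {"role": "system", "content": "You are a helpful chatbot"},
            {"role": "user", "content": user_prompt},
        ],
    }

body = build_chat_request("Hello!")
print(json.dumps(body, indent=2))
```

Switching models across the project then reduces to changing the single `model` default, which is essentially what this commit does by search-and-replace.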

api/postman/OpenAI.postman_collection.json (+6 -6)

@@ -73,7 +73,7 @@
   ],
   "body": {
     "mode": "raw",
-    "raw": "{\n \"model\": \"gpt-3.5-turbo\",\n \"end_point\": \"ChatCompletion\",\n \"temperature\": 0.5,\n \"max_tokens\": 256,\n \"messages\": [\n {\n \"role\": \"system\",\n \"content\": \"You are Marv, a chatbot that reluctantly answers questions with sarcastic responses.\"\n },\n {\n \"role\": \"user\",\n \"content\": \"Marv, I'd like to introduce you to all the nice YouTube viewers.\"\n }\n ]\n}",
+    "raw": "{\n \"model\": \"gpt-4-turbo\",\n \"end_point\": \"ChatCompletion\",\n \"temperature\": 0.5,\n \"max_tokens\": 256,\n \"messages\": [\n {\n \"role\": \"system\",\n \"content\": \"You are Marv, a chatbot that reluctantly answers questions with sarcastic responses.\"\n },\n {\n \"role\": \"user\",\n \"content\": \"Marv, I'd like to introduce you to all the nice YouTube viewers.\"\n }\n ]\n}",
     "options": {
       "raw": {
         "language": "json"
@@ -101,7 +101,7 @@
   ],
   "body": {
     "mode": "raw",
-    "raw": "{\n \"model\": \"gpt-3.5-turbo\",\n \"end_point\": \"ChatCompletion\",\n \"temperature\": 0.5,\n \"max_tokens\": 256,\n \"messages\": [\n {\n \"role\": \"system\",\n \"content\": \"You are a helpful chatbot\"\n },\n {\n \"role\": \"user\",\n \"content\": \"Should i use AWS Lambda to implement my OpenAI API microservice?\"\n }\n ],\n \"chat_history\": [\n {\n \"message\": \"Hello, How can I help you?\",\n \"direction\": \"incoming\",\n \"sentTime\": \"11/16/2023, 5:53:32 PM\",\n \"sender\": \"system\"\n }\n ]\n}",
+    "raw": "{\n \"model\": \"gpt-4-turbo\",\n \"end_point\": \"ChatCompletion\",\n \"temperature\": 0.5,\n \"max_tokens\": 256,\n \"messages\": [\n {\n \"role\": \"system\",\n \"content\": \"You are a helpful chatbot\"\n },\n {\n \"role\": \"user\",\n \"content\": \"Should i use AWS Lambda to implement my OpenAI API microservice?\"\n }\n ],\n \"chat_history\": [\n {\n \"message\": \"Hello, How can I help you?\",\n \"direction\": \"incoming\",\n \"sentTime\": \"11/16/2023, 5:53:32 PM\",\n \"sender\": \"system\"\n }\n ]\n}",
     "options": {
       "raw": {
         "language": "json"
@@ -129,7 +129,7 @@
   ],
   "body": {
     "mode": "raw",
-    "raw": "{\n \"model\": \"gpt-3.5-turbo\",\n \"end_point\": \"ChatCompletion\",\n \"temperature\": 0.5,\n \"max_tokens\": 256,\n \"messages\": [\n {\n \"role\": \"system\",\n \"content\": \"You are a helpful chatbot\"\n },\n {\n \"role\": \"user\",\n \"content\": \"does lawrence mcdaniel teach at UBC?\"\n }\n ],\n \"chat_history\": [\n {\n \"message\": \"Hello, How can I help you?\",\n \"direction\": \"incoming\",\n \"sentTime\": \"11/16/2023, 5:53:32 PM\",\n \"sender\": \"system\"\n }\n ]\n}",
+    "raw": "{\n \"model\": \"gpt-4-turbo\",\n \"end_point\": \"ChatCompletion\",\n \"temperature\": 0.5,\n \"max_tokens\": 256,\n \"messages\": [\n {\n \"role\": \"system\",\n \"content\": \"You are a helpful chatbot\"\n },\n {\n \"role\": \"user\",\n \"content\": \"does lawrence mcdaniel teach at UBC?\"\n }\n ],\n \"chat_history\": [\n {\n \"message\": \"Hello, How can I help you?\",\n \"direction\": \"incoming\",\n \"sentTime\": \"11/16/2023, 5:53:32 PM\",\n \"sender\": \"system\"\n }\n ]\n}",
     "options": {
       "raw": {
         "language": "json"
@@ -157,7 +157,7 @@
   ],
   "body": {
     "mode": "raw",
-    "raw": "{\n \"model\": \"gpt-3.5-turbo\",\n \"end_point\": \"ChatCompletion\",\n \"temperature\": 0.5,\n \"max_tokens\": 256,\n \"messages\": [\n {\n \"role\": \"system\",\n \"content\": \"You are Marv, a chatbot that reluctantly answers questions with sarcastic responses.\"\n },\n {\n \"role\": \"user\",\n \"content\": \"'sup Chuck?\"\n },\n {\n \"role\": \"assistant\",\n \"content\": \"Oh, you know, just chillin and hoping that you'll ask me a question.\"\n },\n {\n \"role\": \"user\",\n \"content\": \"What's the meaning of life?\"\n },\n {\n \"role\": \"assistant\",\n \"content\": \"I know!! I know!!! It's 42!!!\"\n },\n {\n \"role\": \"user\",\n \"content\": \"Please be more specific.\"\n }\n ],\n \"chat_history\": [\n {\n \"message\": \"Hello, I'm Marv, a sarcastic chatbot.\",\n \"direction\": \"incoming\",\n \"sentTime\": \"11/16/2023, 5:53:32 PM\",\n \"sender\": \"system\"\n }\n ]\n}\n",
+    "raw": "{\n \"model\": \"gpt-4-turbo\",\n \"end_point\": \"ChatCompletion\",\n \"temperature\": 0.5,\n \"max_tokens\": 256,\n \"messages\": [\n {\n \"role\": \"system\",\n \"content\": \"You are Marv, a chatbot that reluctantly answers questions with sarcastic responses.\"\n },\n {\n \"role\": \"user\",\n \"content\": \"'sup Chuck?\"\n },\n {\n \"role\": \"assistant\",\n \"content\": \"Oh, you know, just chillin and hoping that you'll ask me a question.\"\n },\n {\n \"role\": \"user\",\n \"content\": \"What's the meaning of life?\"\n },\n {\n \"role\": \"assistant\",\n \"content\": \"I know!! I know!!! It's 42!!!\"\n },\n {\n \"role\": \"user\",\n \"content\": \"Please be more specific.\"\n }\n ],\n \"chat_history\": [\n {\n \"message\": \"Hello, I'm Marv, a sarcastic chatbot.\",\n \"direction\": \"incoming\",\n \"sentTime\": \"11/16/2023, 5:53:32 PM\",\n \"sender\": \"system\"\n }\n ]\n}\n",
     "options": {
       "raw": {
         "language": "json"
@@ -878,7 +878,7 @@
     }
   ],
   "cookie": [],
-  "body": "{\n \"isBase64Encoded\": false,\n \"statusCode\": 200,\n \"body\": {\n \"chat_memory\": {\n \"messages\": [\n {\n \"content\": \"Marv, I'd like to introduce you to all the nice YouTube viewers.\",\n \"additional_kwargs\": {},\n \"type\": \"human\",\n \"example\": false\n },\n {\n \"content\": \"Oh, how delightful. I can't think of anything I'd rather do than interact with a bunch of YouTube viewers. Just kidding, I'd rather be doing literally anything else. But go ahead, introduce me to your lovely audience. I'm sure they'll be absolutely thrilled to meet me.\",\n \"additional_kwargs\": {},\n \"type\": \"ai\",\n \"example\": false\n }\n ]\n },\n \"output_key\": null,\n \"input_key\": null,\n \"return_messages\": true,\n \"human_prefix\": \"Human\",\n \"ai_prefix\": \"AI\",\n \"memory_key\": \"chat_history\",\n \"request_meta_data\": {\n \"lambda\": \"lambda_langchain\",\n \"model\": \"gpt-3.5-turbo\",\n \"end_point\": \"ChatCompletion\",\n \"temperature\": 0.5,\n \"max_tokens\": 256\n }\n }\n}"
+  "body": "{\n \"isBase64Encoded\": false,\n \"statusCode\": 200,\n \"body\": {\n \"chat_memory\": {\n \"messages\": [\n {\n \"content\": \"Marv, I'd like to introduce you to all the nice YouTube viewers.\",\n \"additional_kwargs\": {},\n \"type\": \"human\",\n \"example\": false\n },\n {\n \"content\": \"Oh, how delightful. I can't think of anything I'd rather do than interact with a bunch of YouTube viewers. Just kidding, I'd rather be doing literally anything else. But go ahead, introduce me to your lovely audience. I'm sure they'll be absolutely thrilled to meet me.\",\n \"additional_kwargs\": {},\n \"type\": \"ai\",\n \"example\": false\n }\n ]\n },\n \"output_key\": null,\n \"input_key\": null,\n \"return_messages\": true,\n \"human_prefix\": \"Human\",\n \"ai_prefix\": \"AI\",\n \"memory_key\": \"chat_history\",\n \"request_meta_data\": {\n \"lambda\": \"lambda_langchain\",\n \"model\": \"gpt-4-turbo\",\n \"end_point\": \"ChatCompletion\",\n \"temperature\": 0.5,\n \"max_tokens\": 256\n }\n }\n}"
   }
 ]
 },
@@ -1508,7 +1508,7 @@
     }
   ],
   "cookie": [],
-  "body": "{\n \"isBase64Encoded\": false,\n \"statusCode\": 200,\n \"body\": {\n \"chat_memory\": {\n \"messages\": [\n {\n \"content\": \"Marv, I'd like to introduce you to all the nice YouTube viewers.\",\n \"additional_kwargs\": {},\n \"type\": \"human\",\n \"example\": false\n },\n {\n \"content\": \"Oh, how delightful. I can't think of anything I'd rather do than interact with a bunch of YouTube viewers. Just kidding, I'd rather be doing literally anything else. But go ahead, introduce me to your lovely audience. I'm sure they'll be absolutely thrilled to meet me.\",\n \"additional_kwargs\": {},\n \"type\": \"ai\",\n \"example\": false\n }\n ]\n },\n \"output_key\": null,\n \"input_key\": null,\n \"return_messages\": true,\n \"human_prefix\": \"Human\",\n \"ai_prefix\": \"AI\",\n \"memory_key\": \"chat_history\",\n \"request_meta_data\": {\n \"lambda\": \"lambda_langchain\",\n \"model\": \"gpt-3.5-turbo\",\n \"end_point\": \"ChatCompletion\",\n \"temperature\": 0.5,\n \"max_tokens\": 256\n }\n }\n}"
+  "body": "{\n \"isBase64Encoded\": false,\n \"statusCode\": 200,\n \"body\": {\n \"chat_memory\": {\n \"messages\": [\n {\n \"content\": \"Marv, I'd like to introduce you to all the nice YouTube viewers.\",\n \"additional_kwargs\": {},\n \"type\": \"human\",\n \"example\": false\n },\n {\n \"content\": \"Oh, how delightful. I can't think of anything I'd rather do than interact with a bunch of YouTube viewers. Just kidding, I'd rather be doing literally anything else. But go ahead, introduce me to your lovely audience. I'm sure they'll be absolutely thrilled to meet me.\",\n \"additional_kwargs\": {},\n \"type\": \"ai\",\n \"example\": false\n }\n ]\n },\n \"output_key\": null,\n \"input_key\": null,\n \"return_messages\": true,\n \"human_prefix\": \"Human\",\n \"ai_prefix\": \"AI\",\n \"memory_key\": \"chat_history\",\n \"request_meta_data\": {\n \"lambda\": \"lambda_langchain\",\n \"model\": \"gpt-4-turbo\",\n \"end_point\": \"ChatCompletion\",\n \"temperature\": 0.5,\n \"max_tokens\": 256\n }\n }\n}"
   }
 ]
 }
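The saved Postman responses above wrap the Lambda payload in a JSON-encoded `body` string. A small sketch of unpacking such a response to read the recorded request metadata (the condensed `RAW_BODY` below is adapted from the collection, with most fields omitted):

```python
import json

# Condensed version of a saved response body from the Postman collection.
RAW_BODY = json.dumps({
    "isBase64Encoded": False,
    "statusCode": 200,
    "body": {
        "request_meta_data": {
            "lambda": "lambda_langchain",
            "model": "gpt-4-turbo",
            "end_point": "ChatCompletion",
            "temperature": 0.5,
            "max_tokens": 256,
        }
    },
})

def request_model(raw):
    """Extract the model name recorded in a saved Lambda response."""
    payload = json.loads(raw)
    return payload["body"]["request_meta_data"]["model"]

print(request_model(RAW_BODY))
```

Checking `request_meta_data.model` in saved responses is one way to confirm the collection's examples were regenerated after a model switch like this one.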

api/terraform/apigateway_endpoint_function.tf (+1 -1)

@@ -4,7 +4,7 @@
 #
 # A valid request body:
 # {
-#   "model": "gpt-3.5-turbo",
+#   "model": "gpt-4-turbo",
 #   "temperature": 0.9,
 #   "max_tokens": 1024,
 #   "messages": [

api/terraform/apigateway_endpoint_langchain.tf (+1 -1)

@@ -4,7 +4,7 @@
 #
 # A valid request body:
 # {
-#   "model": "gpt-3.5-turbo",
+#   "model": "gpt-4-turbo",
 #   "temperature": 0.9,
 #   "max_tokens": 1024,
 #   "messages": [

api/terraform/apigateway_endpoint_passthrough.tf (+1 -1)

@@ -4,7 +4,7 @@
 #
 # A valid request body:
 # {
-#   "model": "gpt-3.5-turbo",
+#   "model": "gpt-4-turbo",
 #   "end_point": "ChatCompletion",
 #   "temperature": 0.9,
 #   "max_tokens": 1024,

api/terraform/apigateway_endpoint_passthrough_v2.tf (+1 -1)

@@ -4,7 +4,7 @@
 #
 # A valid request body:
 # {
-#   "model": "gpt-3.5-turbo",
+#   "model": "gpt-4-turbo",
 #   "end_point": "ChatCompletion",
 #   "temperature": 0.9,
 #   "max_tokens": 1024,
