
Commit 34e8bc0

[1.1.3]
Signed-off-by: OEvortex <abhat8283@gmail.com>
1 parent 149999c commit 34e8bc0

File tree

11 files changed (+717, -128 lines)


CHANGELOG.md

Lines changed: 62 additions & 0 deletions
@@ -0,0 +1,62 @@
+# Changelog
+
+All notable changes to the HelpingAI Python SDK will be documented in this file.
+
+## [1.1.3] - 2025-07-18
+
+### Added
+- **🔧 Tool Calling Framework**: New [`@tools decorator`](HelpingAI/tools/core.py:144) for effortless tool creation
+- **🤖 Automatic Schema Generation**: Type hint-based JSON schema creation with docstring parsing
+- **📝 Smart Documentation**: Multi-format docstring parsing (Google, Sphinx, NumPy styles)
+- **🧠 Thread-Safe Tool Registry**: Reliable tool management in multi-threaded environments
+- **🔍 Tool Validation**: Automatic parameter validation against JSON schema
+- **Extended Python Support**: Now supports Python 3.7-3.14
+- **Streaming Support**: Real-time response streaming
+- **Advanced Filtering**: Hide reasoning blocks with `hide_think` parameter
+- New comprehensive [Tool Calling Guide](docs/tool_calling.md)
+
+### Changed
+- **🔄 Universal Compatibility**: Seamless integration with existing OpenAI-format tools
+- **Updated Models**: Support for latest models (Dhanishtha-2.0-preview, Dhanishtha-2.0-preview-mini)
+- **Improved Model Management**: Better fallback handling and detailed model descriptions
+- Deprecated `get_tools_format()` in favor of `get_tools()`
+- Updated documentation to reflect current model names and best practices
+
+### Enhanced
+- **🛡️ Enhanced Tool Error Handling**: Comprehensive exception types for tool operations
+- **Dhanishtha-2.0 Integration**: World's first intermediate thinking model with multi-phase reasoning
+- **Dhanishtha Models**: Advanced reasoning capabilities with transparent thinking processes
+- **OpenAI-Compatible Interface**: Familiar API design
+- **Enhanced Error Handling**: Comprehensive exception types
+
+## [1.1.2] - 2025-06-15
+
+### Added
+- Support for Dhanishtha-2.0-preview model
+- Improved error handling for API requests
+- Enhanced streaming capabilities
+
+### Fixed
+- Various bug fixes and performance improvements
+
+## [1.1.1] - 2025-05-20
+
+### Added
+- Initial support for tool calling
+- Enhanced type hints for better IDE support
+
+### Fixed
+- Connection handling for unstable networks
+- Token counting accuracy
+
+## [1.1.0] - 2025-04-10
+
+### Added
+- Initial public release
+- Support for chat completions
+- Basic streaming functionality
+- Error handling framework
+
+---
+
+For more details, see the [documentation](docs/) or [GitHub repository](https://github.com/HelpingAI/HelpingAI-python).
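
The 1.1.3 tool-calling entries above translate into very little user code. A minimal sketch based on the README examples updated in this commit; the `get_weather` body is a placeholder, and the exact effect of `hide_think` is assumed from its changelog description:

```python
from HelpingAI import HAI
from HelpingAI.tools import tools, get_tools

@tools
def get_weather(city: str, units: str = "celsius") -> str:
    """Get current weather for a city.

    Args:
        city: City name, e.g. "Paris".
        units: Temperature units ("celsius" or "fahrenheit").
    """
    # Placeholder body for illustration; a real tool would call a weather API.
    return f"22 degrees {units} and sunny in {city}"

hai = HAI()
response = hai.chat.completions.create(
    model="Dhanishtha-2.0-preview",
    messages=[{"role": "user", "content": "What's the weather in Paris?"}],
    tools=get_tools(),  # preferred over the deprecated get_tools_format()
    hide_think=True,    # per the changelog, hides <think> reasoning blocks
)
print(response.choices[0].message.content)
```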

HelpingAI/tools/__init__.py

Lines changed: 7 additions & 1 deletion
@@ -3,6 +3,12 @@
 
 This module provides decorators and utilities for creating standard
 tool definitions from Python functions with minimal boilerplate.
+
+Key components:
+- @tools decorator: Transform Python functions into AI-callable tools
+- Fn class: Represent callable functions with metadata
+- get_tools(): Get registered tools (preferred over get_tools_format)
+- get_registry(): Access the tool registry for advanced management
 """
 
 from .core import Fn, tools, get_tools, get_tools_format, clear_registry, get_registry
@@ -23,7 +29,7 @@
     get_compatibility_warnings
 )
 
-__version__ = "1.1.0"
+__version__ = "1.1.3"
 
 __all__ = [
     # Core classes and functions
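
A short sketch of how the components listed in the new docstring fit together; the `add` tool is made up for illustration, and the assumption that `clear_registry()` empties the global registry is inferred from its name rather than shown in this diff:

```python
from HelpingAI.tools import tools, get_tools, clear_registry

@tools
def add(a: int, b: int) -> int:
    """Add two integers.

    Args:
        a: First addend.
        b: Second addend.
    """
    return a + b

# get_tools() returns the registered Fn objects; each Fn can emit a
# standard tool-format dict via to_tool_format().
for fn in get_tools():
    print(fn.to_tool_format())

clear_registry()  # assumed: reset the global registry (e.g. between tests)
```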

HelpingAI/tools/compatibility.py

Lines changed: 1 addition & 1 deletion
@@ -228,7 +228,7 @@ def is_fn_object(obj: Any) -> bool:
     Returns:
         True if object is an Fn instance
     """
-    return hasattr(obj, 'to_openai_tool') and hasattr(obj, 'call')
+    return hasattr(obj, 'to_tool_format') and hasattr(obj, 'call')
 
 
 def normalize_tool_choice(
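
The updated check duck-types on `to_tool_format` plus `call`, so `Fn` objects created by `@tools` pass while plain OpenAI-format dicts do not. A minimal sketch (the `ping` tool is hypothetical; the import path simply reflects where `is_fn_object` is defined in this file):

```python
from HelpingAI.tools import tools, get_tools
from HelpingAI.tools.compatibility import is_fn_object

@tools
def ping() -> str:
    """Return a static reply."""
    return "pong"

fn = get_tools()[0]                        # the registered Fn object
print(is_fn_object(fn))                    # True: has to_tool_format() and call
print(is_fn_object({"type": "function"}))  # False: plain OpenAI-format dict
```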

HelpingAI/tools/core.py

Lines changed: 7 additions & 1 deletion
@@ -195,16 +195,22 @@ def get_tools(names: List[str] = None) -> List[Fn]:
     return _get_global_registry().get_tools(names)
 
 
-def get_tools_format(names: List[str] = None) -> List[Dict[str, Any]]:
+def get_tools_format(names: List[str] = None, category: str = None) -> List[Dict[str, Any]]:
     """Get tools in standard tool format.
 
     Args:
         names: Specific tool names to retrieve
+        category: Category name to filter tools (optional)
 
     Returns:
         List of tool definitions in standard format
     """
     tools_list = get_tools(names)
+
+    # Filter by category if provided
+    if category and hasattr(tools_list[0], 'category'):
+        tools_list = [tool for tool in tools_list if tool.category == category]
+
     return [tool.to_tool_format() for tool in tools_list]
 
 
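In use, the new parameter is a simple keyword filter. A hedged sketch; `get_tools()` remains the preferred call per the changelog, and how a tool acquires a `category` attribute is not shown in this hunk, so the "math" category below is an assumption:

```python
from HelpingAI.tools import get_tools_format

# All registered tools, as standard tool-format dicts.
all_tools = get_tools_format()

# Only tools whose Fn object carries category == "math" (assumes categories
# are attached elsewhere, e.g. when the tool is registered).
math_tools = get_tools_format(category="math")
```
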
HelpingAI/version.py

Lines changed: 1 addition & 1 deletion
@@ -1,2 +1,2 @@
 """Version information."""
-VERSION = "1.1.2"
+VERSION = "1.1.3"

README.md

Lines changed: 16 additions & 56 deletions
@@ -52,7 +52,7 @@ hai = HAI()
 
 # Create a chat completion
 response = hai.chat.completions.create(
-    model="Helpingai3-raw",
+    model="Dhanishtha-2.0-preview",
     messages=[
         {"role": "system", "content": "You are an expert in emotional intelligence."},
         {"role": "user", "content": "What makes a good leader?"}
@@ -67,7 +67,7 @@ print(response.choices[0].message.content)
 ```python
 # Stream responses in real-time
 for chunk in hai.chat.completions.create(
-    model="Helpingai3-raw",
+    model="Dhanishtha-2.0-preview",
     messages=[{"role": "user", "content": "Tell me about empathy"}],
     stream=True
 ):
@@ -113,7 +113,7 @@ def make_completion_with_retry(messages, max_retries=3):
     for attempt in range(max_retries):
         try:
             return hai.chat.completions.create(
-                model="Helpingai3-raw",
+                model="Dhanishtha-2.0-preview",
                 messages=messages
             )
         except RateLimitError as e:
@@ -130,24 +130,24 @@ def make_completion_with_retry(messages, max_retries=3):
 
 ## 🤖 Available Models
 
-### Helpingai3-raw
-- **Advanced Emotional Intelligence**: Enhanced emotional understanding and contextual awareness
-- **Training Data**: 15M emotional dialogues, 3M therapeutic exchanges, 250K cultural conversations, 1M crisis response scenarios
-- **Best For**: AI companionship, emotional support, therapy guidance, personalized learning
-
 ### Dhanishtha-2.0-preview
 - **World's First Intermediate Thinking Model**: Multi-phase reasoning with self-correction capabilities
 - **Unique Features**: `<think>...</think>` blocks for transparent reasoning, structured emotional reasoning (SER)
 - **Best For**: Complex problem-solving, analytical tasks, educational content, reasoning-heavy applications
 
+### Dhanishtha-2.0-preview-mini
+- **Lightweight Reasoning Model**: Efficient version of Dhanishtha-2.0-preview
+- **Unique Features**: Same reasoning capabilities in a more compact model
+- **Best For**: Faster responses, mobile applications, resource-constrained environments
+
 ```python
 # List all available models
 models = hai.models.list()
 for model in models:
     print(f"Model: {model.id} - {model.description}")
 
 # Get specific model info
-model = hai.models.retrieve("Helpingai3-raw")
+model = hai.models.retrieve("Dhanishtha-2.0-preview")
 print(f"Model: {model.name}")
 
 # Use Dhanishtha-2.0 for complex reasoning
@@ -165,7 +165,7 @@ Transform any Python function into a powerful AI tool with zero boilerplate usin
 
 ```python
 from HelpingAI import HAI
-from HelpingAI.tools import tools, get_tools_format
+from HelpingAI.tools import tools, get_tools
 
 @tools
 def get_weather(city: str, units: str = "celsius") -> str:
@@ -193,9 +193,9 @@ def calculate_tip(bill_amount: float, tip_percentage: float = 15.0) -> dict:
 # Use with chat completions
 hai = HAI()
 response = hai.chat.completions.create(
-    model="Helpingai3-raw",
+    model="Dhanishtha-2.0-preview",
     messages=[{"role": "user", "content": "What's the weather in Paris and calculate tip for $50 bill?"}],
-    tools=get_tools_format()  # Automatically includes all @tools functions
+    tools=get_tools()  # Automatically includes all @tools functions
 )
 
 print(response.choices[0].message.content)
@@ -286,13 +286,13 @@ legacy_tools = [{
 # Combine with @tools functions
 combined_tools = merge_tool_lists(
     legacy_tools,  # Existing tools
-    get_tools_format(),  # @tools functions
+    get_tools(),  # @tools functions
     "math"  # Category name (if you have categorized tools)
 )
 
 # Use in chat completion
 response = hai.chat.completions.create(
-    model="Helpingai3-raw",
+    model="Dhanishtha-2.0-preview",
     messages=[{"role": "user", "content": "Help me with weather, calculations, and web search"}],
     tools=combined_tools
 )
@@ -396,32 +396,10 @@ Comprehensive documentation is available:
 
 - [📖 Getting Started Guide](docs/getting_started.md) - Installation and basic usage
 - [🔧 API Reference](docs/api_reference.md) - Complete API documentation
+- [🛠️ Tool Calling Guide](docs/tool_calling.md) - Creating and using AI-callable tools
 - [💡 Examples](docs/examples.md) - Code examples and use cases
 - [❓ FAQ](docs/faq.md) - Frequently asked questions
 
-## 🏗️ Project Structure
-
-```
-HelpingAI-python/
-├── HelpingAI/               # Main package
-│   ├── __init__.py          # Package initialization
-│   ├── client.py            # Main HAI client
-│   ├── models.py            # Model management
-│   ├── base_models.py       # Data models
-│   ├── error.py             # Exception classes
-│   ├── version.py           # Version information
-│   └── tools/               # Tool calling utilities
-│       ├── __init__.py      # Tools module exports
-│       ├── core.py          # @tools decorator and Fn class
-│       ├── schema.py        # Automatic schema generation
-│       ├── registry.py      # Tool registry management
-│       ├── compatibility.py # Format conversion utilities
-│       └── errors.py        # Tool-specific exceptions
-├── docs/                    # Documentation
-├── tests/                   # Test suite
-├── setup.py                 # Package configuration
-└── README.md                # This file
-```
 
 ## 🔧 Requirements
 
@@ -449,26 +427,8 @@ This project is licensed under the MIT License - see the [LICENSE](LICENSE) file
 - **Issues**: [GitHub Issues](https://github.com/HelpingAI/HelpingAI-python/issues)
 - **Documentation**: [HelpingAI Docs](https://helpingai.co/docs)
 - **Dashboard**: [HelpingAI Dashboard](https://helpingai.co/dashboard)
-- **Email**: varun@helpingai.co
-
-## 🚀 What's New in v1.1.0
-
-- **🔧 Tool Calling Framework**: New [`@tools decorator`](HelpingAI/tools/core.py:144) for effortless tool creation
-- **🤖 Automatic Schema Generation**: Type hint-based JSON schema creation with docstring parsing
-- **🔄 Universal Compatibility**: Seamless integration with existing OpenAI-format tools
-- **📝 Smart Documentation**: Multi-format docstring parsing (Google, Sphinx, NumPy styles)
-- **🛡️ Enhanced Tool Error Handling**: Comprehensive exception types for tool operations
-- **Extended Python Support**: Now supports Python 3.7-3.14
-- **Updated Models**: Support for latest models (Helpingai3-raw, Dhanishtha-2.0-preview)
-- **Dhanishtha-2.0 Integration**: World's first intermediate thinking model with multi-phase reasoning
-- **HelpingAI3 Support**: Enhanced emotional intelligence with advanced contextual awareness
-- **Improved Model Management**: Better fallback handling and detailed model descriptions
-- **OpenAI-Compatible Interface**: Familiar API design
-- **Enhanced Error Handling**: Comprehensive exception types
-- **Streaming Support**: Real-time response streaming
-- **Advanced Filtering**: Hide reasoning blocks with `hide_think` parameter
+- **Email**: Team@helpingai.co
 
----
 
 **Built with ❤️ by the HelpingAI Team**
 

docs/api_reference.md

Lines changed: 6 additions & 6 deletions
@@ -94,7 +94,7 @@ def create(
 
 | Parameter | Type | Description |
 |-----------|------|-------------|
-| `model` | `str` | Model ID to use (e.g., "Helpingai3-raw", "Dhanishtha-2.0-preview") |
+| `model` | `str` | Model ID to use (e.g., "Dhanishtha-2.0-preview", "Dhanishtha-2.0-preview-mini") |
 | `messages` | `List[Dict[str, str]]` | List of message objects with "role" and "content" |
 
 **Optional Parameters:**
@@ -128,7 +128,7 @@
 ```python
 # Basic completion
 response = hai.chat.completions.create(
-    model="Helpingai3-raw",
+    model="Dhanishtha-2.0-preview",
     messages=[
         {"role": "user", "content": "Hello!"}
     ]
@@ -151,7 +151,7 @@ response = hai.chat.completions.create(
 
 # Streaming completion
 for chunk in hai.chat.completions.create(
-    model="Helpingai3-raw",
+    model="Dhanishtha-2.0-preview",
     messages=[{"role": "user", "content": "Tell me a story"}],
     stream=True
 ):
@@ -198,7 +198,7 @@ def retrieve(model_id: str) -> Model
 **Example:**
 
 ```python
-model = hai.models.retrieve("Helpingai3-raw")
+model = hai.models.retrieve("Dhanishtha-2.0-preview")
 print(f"Model: {model.name}")
 ```
 
@@ -461,7 +461,7 @@ import time
 def handle_completion_errors():
     try:
         response = hai.chat.completions.create(
-            model="Helpingai3-raw",
+            model="Dhanishtha-2.0-preview",
            messages=[{"role": "user", "content": "Hello"}]
        )
        return response
@@ -611,7 +611,7 @@ except Exception as e:
 def stream_completion(prompt: str):
     try:
         stream = hai.chat.completions.create(
-            model="Helpingai3-raw",
+            model="Dhanishtha-2.0-preview",
            messages=[{"role": "user", "content": prompt}],
            stream=True,
            hide_think=True
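
The last hunk ends inside the `create(...)` call, so the chunk-handling part of `stream_completion` is not visible here. A hedged completion of the loop, assuming an existing `HAI()` client and OpenAI-style `choices[0].delta.content` fields on streamed chunks (neither is confirmed by this diff):

```python
from HelpingAI import HAI

hai = HAI()

def stream_completion(prompt: str) -> None:
    stream = hai.chat.completions.create(
        model="Dhanishtha-2.0-preview",
        messages=[{"role": "user", "content": prompt}],
        stream=True,
        hide_think=True,
    )
    for chunk in stream:
        # Assumption: streamed chunks mirror the OpenAI delta shape.
        delta = chunk.choices[0].delta
        if getattr(delta, "content", None):
            print(delta.content, end="", flush=True)
```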
