4 changes: 2 additions & 2 deletions src/agentlab/llm/llm_utils.py
@@ -188,8 +188,8 @@ def get_tokenizer(model_name="gpt-4"):
     logging.info(f"Could not find a tokenizer for model {model_name}. Trying HuggingFace.")
     try:
         return AutoTokenizer.from_pretrained(model_name)
-    except OSError:
-        logging.info(f"Could not find a tokenizer for model {model_name}. Defaulting to gpt-4.")
+    except Exception as e:
+        logging.info(f"Could not find a tokenizer for model {model_name}: {e} Defaulting to gpt-4.")
Comment on lines 189 to +192
Over-broad Exception Handling (category: Error Handling)

What is the issue?

Catching the broad `Exception` class is too indiscriminate and could mask critical errors that should be handled differently.

Why this matters

This could catch and ignore serious issues like memory errors or import errors that require different handling, potentially making debugging more difficult.
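To illustrate the reviewer's point, here is a minimal sketch (the loader and its failure modes are hypothetical, not from the PR): a narrow `except` swallows only the expected failure, while an unexpected error such as `ImportError` still surfaces to the caller.

```python
def load_tokenizer(name):
    """Hypothetical loader used only to illustrate the two failure modes."""
    if name == "missing":
        raise OSError("tokenizer files not found")       # expected failure
    raise ImportError("sentencepiece is not installed")  # unexpected failure

def get_tokenizer_narrow(name):
    try:
        return load_tokenizer(name)
    except OSError:  # only the expected failure triggers the fallback
        return "gpt-4 fallback"

# With `except Exception` instead, the ImportError above would be
# silently converted into the gpt-4 fallback, hiding the real problem.
```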

Suggested change

Catch specific exceptions that are expected in tokenizer loading:

try:
    return AutoTokenizer.from_pretrained(model_name)
except (OSError, ValueError) as e:
    logging.info(f"Could not find a tokenizer for model {model_name}: {e} Defaulting to gpt-4.")
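The same idea can be factored into a small reusable helper (a sketch, not part of the PR; the names are invented): pass in the exception types the fallback is meant to absorb, and let everything else propagate.

```python
import logging

def load_with_fallback(load, fallback, expected=(OSError, ValueError)):
    """Call load(); on an *expected* failure, log it and call fallback().

    Unexpected exceptions (MemoryError, ImportError, ...) still propagate,
    so they are never masked by the fallback path.
    """
    try:
        return load()
    except expected as e:
        logging.info(f"Primary loader failed: {e}. Using fallback.")
        return fallback()
```

For example, `load_with_fallback(lambda: AutoTokenizer.from_pretrained(name), lambda: tiktoken.encoding_for_model("gpt-4"))` would reproduce the patched function's behavior while keeping the error handling narrow.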
     return tiktoken.encoding_for_model("gpt-4")

