40 changes: 40 additions & 0 deletions install_fast_downward.sh
@@ -0,0 +1,40 @@
#!/bin/bash
set -e  # Abort on the first failing command.

# Install Fast Downward for predicators
echo "Installing Fast Downward..."

# Create external directory if it doesn't exist
mkdir -p external

# Clone Fast Downward if not already present
if [ ! -d "external/downward" ]; then
echo "Cloning Fast Downward repository..."
git clone https://github.com/aibasel/downward.git external/downward
fi

# Build Fast Downward
echo "Building Fast Downward..."
cd external/downward
python build.py

# Get the absolute path
FD_PATH=$(pwd)/fast-downward.py

# Go back to predicators root
cd ../..

# Create environment setup script
cat > setup_fd_env.sh << EOF
#!/bin/bash
export FD_EXEC_PATH=$FD_PATH
echo "Fast Downward path set to: $FD_PATH"
EOF

chmod +x setup_fd_env.sh

echo "Fast Downward installed successfully!"
echo "To set the environment variable for your current session, run:"
echo " source ./setup_fd_env.sh"
echo ""
echo "To make it permanent, add this line to your shell profile (.bashrc, .zshrc, etc.):"
echo " export FD_EXEC_PATH=$FD_PATH"
155 changes: 155 additions & 0 deletions instruction_python3.13.md
@@ -0,0 +1,155 @@
# Predicators Installation Instructions for Python 3.13

This guide provides step-by-step instructions for installing the `predicators` package on Python 3.13, addressing compatibility issues with the original setup.py dependencies.

## Prerequisites

- Python 3.13.x
- A virtual environment (recommended)
- Git (for Git-based dependencies)

## Installation Steps

### 1. Set up and activate your virtual environment

```bash
# If you don't have a virtual environment yet:
python3 -m venv .venv

# Activate the virtual environment
source .venv/bin/activate
```

### 2. Install build dependencies

The original setup.py has strict version pins that aren't compatible with Python 3.13. First, install the required build tools:

```bash
pip install --upgrade setuptools wheel
```

### 3. Install predicators without dependencies

This avoids the dependency resolution conflicts:

```bash
pip install --no-deps -e .
```

### 4. Install compatible dependencies

Use the Python 3.13 compatible requirements file:

```bash
pip install -r requirements-python3.13.txt
```

### 5. Install Git-based dependencies

Install the remaining dependencies from Git repositories:

```bash
pip install "git+https://github.com/sebdumancic/structure_mapping.git" "git+https://github.com/tomsilver/pg3.git" "git+https://github.com/Learning-and-Intelligent-Systems/gym-sokoban.git"
```

### 6. Set up environment variables

Set the required environment variable:

```bash
export PYTHONHASHSEED=0
```

To make it permanent, add it to your shell profile:

```bash
# For bash
echo "export PYTHONHASHSEED=0" >> ~/.bashrc

# For zsh
echo "export PYTHONHASHSEED=0" >> ~/.zshrc
```
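Why this matters: `PYTHONHASHSEED=0` makes Python's string hashing deterministic, which predicators relies on for reproducible runs. The sketch below (the `string_hash` helper is ours, not part of the repo) shows the effect; note the variable is read at interpreter startup, so it must be set in the environment, not from inside a running process.

```python
import os
import subprocess
import sys

def string_hash(seed: str) -> str:
    """Hash a string in a fresh interpreter so PYTHONHASHSEED takes effect.

    The variable is read at interpreter startup, so setting it inside an
    already-running process has no effect.
    """
    result = subprocess.run(
        [sys.executable, "-c", "print(hash('predicators'))"],
        env={**os.environ, "PYTHONHASHSEED": seed},
        capture_output=True, text=True, check=True,
    )
    return result.stdout.strip()

# With a fixed seed, the hash is reproducible across interpreter runs.
assert string_hash("0") == string_hash("0")
```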

### 7. Install Fast Downward

```bash
bash install_fast_downward.sh
# Run from the predicators repo root so the path resolves correctly:
export FD_EXEC_PATH=$(pwd)/external/downward/fast-downward.py
```

Add to your shell profile for persistence (run from the predicators repo root so `$(pwd)` resolves correctly):
```bash
# For bash
echo "export FD_EXEC_PATH=$(pwd)/external/downward/fast-downward.py" >> ~/.bashrc

# For zsh
echo "export FD_EXEC_PATH=$(pwd)/external/downward/fast-downward.py" >> ~/.zshrc
```
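Before running the planner, it can help to sanity-check that `FD_EXEC_PATH` actually points at the Fast Downward driver script. A minimal sketch; `check_fd_path` is our own helper, not part of predicators:

```python
import os
from pathlib import Path

def check_fd_path(env: dict) -> bool:
    """Return True iff FD_EXEC_PATH is set and points to an existing file."""
    path = env.get("FD_EXEC_PATH")
    return path is not None and Path(path).is_file()

if not check_fd_path(os.environ):
    print("FD_EXEC_PATH is unset or does not point to fast-downward.py")
```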

## Running Predicators

After installation, you can run predicators with the environment variables set:

```bash
# Set environment variable (required)
export PYTHONHASHSEED=0

# Example command
python predicators/main.py --env burger --approach vlm_open_loop --seed 0 --num_train_tasks 1 --num_test_tasks 1 --bilevel_plan_without_sim True --make_failure_videos --sesame_task_planner fdopt --debug --vlm_model_name gemini-1.5-pro-latest --vlm_open_loop_use_training_demos True
```

## Verification

To verify the installation worked:

```bash
source .venv/bin/activate
export PYTHONHASHSEED=0
python predicators/main.py --help
```

You should see the help message without any import errors.

## Known Issues and Warnings

1. **PyBullet not available**: PyBullet has compilation issues with Python 3.13. All PyBullet-dependent environments (those with names starting with `pybullet_`) will be automatically skipped. This is expected and allows you to use non-PyBullet environments.

2. **Gym deprecation warning**: You may see warnings about gym being unmaintained. This is expected and doesn't affect functionality.

3. **Package version conflicts**: The dependency resolver may show warnings about version conflicts between the strict pins in setup.py and the installed versions. These are expected and generally don't cause issues.

4. **pkg_resources deprecation**: You may see warnings about pkg_resources being deprecated. This comes from some dependencies and is not critical.
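The skipping described in item 1 amounts to filtering registered environment names by prefix. A rough sketch of the idea, assuming names and function are illustrative and not the repo's actual registry code:

```python
def usable_envs(registered, pybullet_available):
    """Filter out pybullet_* environments when PyBullet cannot be imported."""
    if pybullet_available:
        return list(registered)
    return [name for name in registered if not name.startswith("pybullet_")]

envs = ["burger", "pybullet_cover", "sokoban", "pybullet_blocks"]
print(usable_envs(envs, pybullet_available=False))  # → ['burger', 'sokoban']
```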

## Troubleshooting

### If you encounter "ModuleNotFoundError"

Make sure:
1. Your virtual environment is activated
2. You've installed all dependencies as listed above
3. The predicators package was installed with `pip install --no-deps -e .`

### If you encounter build errors

Ensure you have the latest setuptools and wheel:
```bash
pip install --upgrade setuptools wheel pip
```

### If specific dependencies fail to install

Some dependencies may need system-level packages. On macOS with Homebrew:
```bash
# For opencv-python issues
brew install opencv

# For other compilation issues
xcode-select --install
```

## Key Differences from Original setup.py

The main changes made for Python 3.13 compatibility:

- **numpy**: Updated from `==1.23.5` to `>=1.24.0` (numpy 1.23.5 doesn't support Python 3.13)
- **torch/torchvision**: Updated to compatible versions
- **Other packages**: Used more flexible version constraints instead of strict pins
- **Missing dependencies**: Added `psutil` which was required but not listed

This approach maintains compatibility while working around the Python 3.13 restrictions in the original dependency specifications.
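The effect of loosening the pins can be illustrated with a toy version comparison: a newer release satisfies the `>=1.24.0` floor but would fail the strict `==1.23.5` pin. This uses naive dot-separated version tuples; real resolvers follow the full PEP 440 rules:

```python
def vtuple(version: str) -> tuple:
    """Parse a simple dot-separated version string (no PEP 440 subtleties)."""
    return tuple(int(part) for part in version.split("."))

installed = "1.26.4"  # example newer release; illustrative only
assert vtuple(installed) >= vtuple("1.24.0")  # satisfies >=1.24.0
assert vtuple(installed) != vtuple("1.23.5")  # would fail ==1.23.5
```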
10 changes: 5 additions & 5 deletions predicators/approaches/active_sampler_learning_approach.py
@@ -1,10 +1,10 @@
 """An approach that performs active sampler learning.
 
-The current implementation assumes for convenience that NSRTs and options are
-1:1 and share the same parameters (like a PDDL environment). It is
-straightforward conceptually to remove this assumption, because the approach
-uses its own NSRTs to select options, but it is difficult implementation-wise,
-so we're punting for now.
+The current implementation assumes for convenience that NSRTs and
+options are 1:1 and share the same parameters (like a PDDL environment).
+It is straightforward conceptually to remove this assumption, because
+the approach uses its own NSRTs to select options, but it is difficult
+implementation-wise, so we're punting for now.
 
 See scripts/configs/active_sampler_learning.yaml for examples.
 """
2 changes: 1 addition & 1 deletion predicators/approaches/bridge_policy_approach.py
@@ -402,7 +402,7 @@ def _Can_plan(self, state: State, _: Sequence[Object]) -> bool:
 
     def call_planner_policy(self, state: State, _: Dict, __: Sequence[Object],
                             ___: Array) -> Action:
-        """policy for CallPlanner option."""
+        """Policy for CallPlanner option."""
         self._current_control = "planner"
         # create a new task where the init state is our current state
         current_task = Task(state, self._train_tasks[0].goal)
7 changes: 4 additions & 3 deletions predicators/approaches/llm_option_renaming_approach.py
@@ -37,8 +37,9 @@ def _renaming_suffixes(self) -> List[str]:
 
     def _create_replacements(self) -> Dict[str, str]:
         return {
-            o.name: utils.generate_random_string(len(o.name),
-                                                 list(string.ascii_lowercase),
-                                                 self._rng)
+            o.name:
+            utils.generate_random_string(len(o.name),
+                                         list(string.ascii_lowercase),
+                                         self._rng)
             for o in self._initial_options
         }
7 changes: 4 additions & 3 deletions predicators/approaches/llm_predicate_renaming_approach.py
@@ -37,8 +37,9 @@ def _renaming_suffixes(self) -> List[str]:
 
     def _create_replacements(self) -> Dict[str, str]:
         return {
-            p.name: utils.generate_random_string(len(p.name),
-                                                 list(string.ascii_lowercase),
-                                                 self._rng)
+            p.name:
+            utils.generate_random_string(len(p.name),
+                                         list(string.ascii_lowercase),
+                                         self._rng)
             for p in self._get_current_predicates()
         }
3 changes: 2 additions & 1 deletion predicators/approaches/maple_q_approach.py
@@ -132,7 +132,8 @@ def _learn_nsrts(self, trajectories: List[LowLevelTrajectory],
             for nsrt in self._nsrts:
                 all_objects = {
                     o
-                    for t in self._train_tasks for o in t.init
+                    for t in self._train_tasks
+                    for o in t.init
                 }
                 all_ground_nsrts.update(
                     utils.all_ground_nsrts(nsrt, all_objects))
4 changes: 2 additions & 2 deletions predicators/approaches/nsrt_rl_approach.py
@@ -139,8 +139,8 @@ def _get_experience_from_result(
                     next_state, cur_option.objects)
                 had_sufficient_steps = (
                     next_state.allclose(traj.states[-1])
-                    and (CFG.max_num_steps_interaction_request - j >
-                         CFG.nsrt_rl_valid_reward_steps_threshold))
+                    and (CFG.max_num_steps_interaction_request - j
+                         > CFG.nsrt_rl_valid_reward_steps_threshold))
                 if terminate:
                     option_to_data[parent_option].append(experience)
                     cur_option_idx += 1
16 changes: 9 additions & 7 deletions predicators/approaches/online_nsrt_learning_approach.py
@@ -136,11 +136,12 @@ def _create_explorer(self) -> BaseExplorer:
     def _score_atoms_novelty(self, atoms: Set[GroundAtom]) -> float:
         """Score the novelty of a ground atom set, with higher better.
 
-        Score based on the number of times that this atom set has been seen in
-        the data, with object identities ignored (i.e., this is lifted).
+        Score based on the number of times that this atom set has been
+        seen in the data, with object identities ignored (i.e., this is
+        lifted).
 
-        Assumes that the size of the atom set is between CFG.glib_min_goal_size
-        and CFG.glib_max_goal_size (inclusive).
+        Assumes that the size of the atom set is between
+        CFG.glib_min_goal_size and CFG.glib_max_goal_size (inclusive).
         """
         assert CFG.glib_min_goal_size <= len(atoms) <= CFG.glib_max_goal_size
         canonical_atoms = self._get_canonical_lifted_atoms(atoms)
@@ -160,9 +161,10 @@ def _get_canonical_lifted_atoms(
 
         This is a helper for novelty scoring for GLIB.
 
-        This is an efficient approximation of what we really care about, which
-        is whether two atom sets unify. It's an approximation because there are
-        tricky cases where the sorting procedure is ambiguous.
+        This is an efficient approximation of what we really care about,
+        which is whether two atom sets unify. It's an approximation
+        because there are tricky cases where the sorting procedure is
+        ambiguous.
         """
         # Create a "signature" for each object, which will be used to break
         # ties when sorting based on predicates alone is not enough.
3 changes: 2 additions & 1 deletion predicators/approaches/pp_param_learning_approach.py
@@ -380,7 +380,8 @@ def elbo_torch(
     use_sparse_trajectory: bool = True,
     debug_log: bool = False,
 ) -> Tuple[Tensor, Tensor, Tensor, Tensor]:
-    """*Differentiable* ELBO computation with efficient, cached condition checks."""
+    """*Differentiable* ELBO computation with efficient, cached condition
+    checks."""
     trajectory = atom_option_trajectory
     num_time_steps = len(trajectory.states)
 
3 changes: 2 additions & 1 deletion predicators/approaches/sme_pg3_analogy_approach.py
@@ -242,7 +242,8 @@ def _create_name_to_instances(env: BaseEnv,
     nsrt_name_to_nsrt = {n.name: n for n in nsrts}
     var_name_to_nsrt_variables = {
         _variable_to_s_exp(v, n.name): (n, v)
-        for n in nsrts for v in n.parameters
+        for n in nsrts
+        for v in n.parameters
     }
     names_to_instances: Dict[str, Dict[str, Any]] = {
         "predicates": pred_name_to_pred,