
Commit cf5b1a1

custom execution
1 parent cdbb3fd commit cf5b1a1


2 files changed: +6 −167 lines changed


README.md

Lines changed: 6 additions & 6 deletions
````diff
@@ -23,18 +23,18 @@ You can install `AsynchronousIterativeAlgorithms` by typing
 julia> ] add AsynchronousIterativeAlgorithms
 ```
 
-## Quick start
+## [Quick start](@id quick_start)
 
 Say you want to implement a distributed version of *Stochastic Gradient Descent* (SGD). You'll need to define:
 
 - an **algorithm structure** subtyping `AbstractAlgorithm{Q,A}`
-- the **initialisation step** where you compute the first iteration
+- the **initialization step** where you compute the first iteration
 - the **worker step** performed by the workers when they receive a query `q::Q` from the central node
 - the asynchronous **central step** performed by the central node when it receives an answer `a::A` from a `worker`
 
-![Sequence Diagram](./docs/src/assets/sequence_diagram.png "Sequence Diagram")
+![Sequence Diagram](docs/assets/sequence_diagram.png)
 
-Let's first of all set up our distributed environement.
+Let's first of all set up our distributed environment.
 
 ```julia
 # Launch multiple processes (or remote machines)
````
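For context, here is a minimal, hedged sketch of what those four pieces could look like, assuming the algorithm object is called as `algo(problem)` to initialize, `algo(q, problem)` on a worker, and `algo(a, worker, problem)` centrally; the field names and the random initialization are illustrative, not the commit's actual code.

```julia
# Hedged sketch, not the commit's actual code: a distributed SGD that
# subtypes AbstractAlgorithm{Q,A} with Q = A = Vector{Float64}.
using Distributed
addprocs(5)                                  # launch 5 local worker processes
@everywhere using AsynchronousIterativeAlgorithms

@everywhere mutable struct SGD <: AbstractAlgorithm{Vector{Float64}, Vector{Float64}}
    stepsize::Float64
    previous_q::Vector{Float64}
    SGD(stepsize::Float64) = new(stepsize, Vector{Float64}())
end

# initialization step: draw a random first iterate of the problem's dimension
@everywhere function (sgd::SGD)(problem::Any)
    sgd.previous_q = rand(problem.n)
end

# worker step: answer a query q with a stochastic gradient scaled by the stepsize
@everywhere function (sgd::SGD)(q::Vector{Float64}, problem::Any)
    sgd.stepsize * problem.∇f(q, rand(1:problem.m))
end

# asynchronous central step: fold a worker's answer a into the current iterate
@everywhere function (sgd::SGD)(a::Vector{Float64}, worker::Int64, problem::Any)
    sgd.previous_q -= a
end
```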
````diff
@@ -76,7 +76,7 @@ Now to the implementation.
 end
 ```
 
-Let's test our algorithm on a linear regression problem with mean squared error loss (LRMSE). This problem must be **compatible with your algorithm**. In this example, it means providing attributes `n` and `m` (dimension of the regressor and number of points), and the method `∇f(x::Vector{Float64}, i::Int64)` (gradient of the linear regression loss on the ith data point)
+Now let's test our algorithm on a linear regression problem with mean squared error loss (LRMSE). This problem must be **compatible with your algorithm**. In this example, it means providing attributes `n` and `m` (dimension of the regressor and number of points), and the method `∇f(x::Vector{Float64}, i::Int64)` (gradient of the linear regression loss on the ith data point)
 
 ```julia
 @everywhere begin
````
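The hunk above cuts off before the problem definition; a compatible problem only needs to expose `n`, `m`, and `∇f` as described. A hypothetical sketch, assuming the distributed setup above (the struct layout and the gradient closure are my own, not the commit's):

```julia
# Hypothetical LRMSE problem exposing n, m, and ∇f(x, i) as the README requires;
# assumes using Distributed and addprocs(...) from the setup above.
@everywhere begin
    struct LRMSE
        A::Matrix{Float64}   # m × n design matrix
        b::Vector{Float64}   # m targets
        n::Int64             # dimension of the regressor
        m::Int64             # number of data points
        ∇f::Function         # stochastic gradient on data point i
    end

    function LRMSE(A::Matrix{Float64}, b::Vector{Float64})
        m, n = size(A)
        # gradient of ½(aᵢᵀx − bᵢ)² with respect to x
        ∇f(x::Vector{Float64}, i::Int64) = A[i, :] * (A[i, :]' * x - b[i])
        LRMSE(A, b, n, m, ∇f)
    end
end
```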
````diff
@@ -103,7 +103,7 @@ We're almost ready to start the algorithm...
 
 ```julia
 # Provide the stopping criteria
-stopat = (1000,0,0.) # (iterations, epochs, time)
+stopat = (iteration=1000, time=42.)
 
 # Instantiate your algorithm
 sgd = SGD(0.01)
````
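Given the new named-tuple stopping criteria (presumably: stop after 1000 iterations or 42 seconds, whichever comes first), a run could look like the sketch below; `start` and the `problem_constructor` signature are assumptions about the package's entry point, not shown in this commit.

```julia
# Hedged usage sketch: `start(algorithm, problem_constructor, stopat)` is an
# assumed entry point, and the data below is synthetic.
stopat = (iteration=1000, time=42.)   # stop at 1000 iterations or 42 s

sgd = SGD(0.01)

# each process pid builds (or loads) its local view of the problem
problem_constructor = pid -> LRMSE(rand(42, 10), rand(42))

history = start(sgd, problem_constructor, stopat)
```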

src/algorithm_templates.jl

Lines changed: 0 additions & 161 deletions
This file was deleted.
