Commit 0948ea3

Merge pull request #806 from luotao1/link
fix some dead links in doc/
2 parents: 0fd44c6 + 3d81703

12 files changed: +24 -222 lines

doc/api/data_provider/pydataprovider2_en.rst
3 additions, 1 deletion

@@ -1,4 +1,4 @@
-.. _api_pydataprovider:
+.. _api_pydataprovider2_en:
 
 PyDataProvider2
 ===============
@@ -104,6 +104,8 @@ And PaddlePadle will do all of the rest things\:
 
 Is this cool?
 
+.. _api_pydataprovider2_en_sequential_model:
+
 DataProvider for the sequential model
 -------------------------------------
 A sequence model takes sequences as its input. A sequence is made up of several
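The hunk above shows the commit's core pattern: a relative ``<...html>`` hyperlink breaks whenever a page moves or the build layout changes, whereas a ``:ref:`` against an explicit label is resolved by Sphinx at build time from anywhere in the tree. A minimal sketch of the idiom (the label, title, and link text here are hypothetical, not taken from the Paddle docs):

```rst
.. _my_feature_overview:

My Feature
==========

Introductory text for the feature.

.. From any other document in the same Sphinx project, link by label,
   not by output path:

See :ref:`my_feature_overview` for details, or with explicit link text,
:ref:`the feature overview <my_feature_overview>`.
```

If the target label is ever renamed, Sphinx reports the dangling ``:ref:`` as an undefined-label warning at build time, which is what makes this safer than raw ``.html`` paths.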

doc/api/predict/swig_py_paddle_en.rst
2 additions, 2 deletions

@@ -23,7 +23,7 @@ python's :code:`help()` function. Let's walk through the above python script:
 
 * At the beginning, use :code:`swig_paddle.initPaddle()` to initialize
   PaddlePaddle with command line arguments, for more about command line arguments
-  see `Command Line Arguments <../cmd_argument/detail_introduction.html>`_.
+  see :ref:`cmd_detail_introduction_en` .
 * Parse the configuration file that is used in training with :code:`parse_config()`.
   Because data to predict with always have no label, and output of prediction work
   normally is the output layer rather than the cost layer, so you should modify
@@ -36,7 +36,7 @@ python's :code:`help()` function. Let's walk through the above python script:
 - Note: As swig_paddle can only accept C++ matrices, we offer a utility
   class DataProviderConverter that can accept the same input data with
   PyDataProvider2, for more information please refer to document
-  of `PyDataProvider2 <../data_provider/pydataprovider2.html>`_.
+  of :ref:`api_pydataprovider2_en` .
 * Do the prediction with :code:`forwardTest()`, which takes the converted
   input data and outputs the activations of the output layer.
 

doc/api/trainer_config_helpers/layers.rst
2 additions, 0 deletions

@@ -1,3 +1,5 @@
+.. _api_trainer_config_helpers_layers:
+
 ======
 Layers
 ======

doc/getstarted/basic_usage/index_en.rst
0 additions, 8 deletions

@@ -99,11 +99,3 @@ In PaddlePaddle, training is just to get a collection of model parameters, which
 Although starts from a random guess, you can see that value of ``w`` changes quickly towards 2 and ``b`` changes quickly towards 0.3. In the end, the predicted line is almost identical with real answer.
 
 There, you have recovered the underlying pattern between ``X`` and ``Y`` only from observed data.
-
-
-5. Where to Go from Here
--------------------------
-
-- `Install and Build <../build_and_install/index.html>`_
-- `Tutorials <../demo/quick_start/index_en.html>`_
-- `Example and Demo <../demo/index.html>`_

doc/howto/cmd_parameter/detail_introduction_en.md
4 additions, 0 deletions

@@ -1,3 +1,7 @@
+```eval_rst
+.. _cmd_detail_introduction_en:
+```
+
 # Detail Description
 
 ## Common
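Plain Markdown has no syntax for reST labels, so the hunk above wraps one in an ``eval_rst`` fence, which the Markdown-to-rst bridge used by docs of this era (recommonmark-style, to the best of my knowledge) evaluates as raw reStructuredText. Once the label exists, any rst page in the project can reach the Markdown page by name; a sketch of the consuming side (the referencing sentence is illustrative):

```rst
.. In any .rst document built by the same Sphinx project:

For the full list of command line flags, see :ref:`cmd_detail_introduction_en`.
```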

doc/howto/deep_model/rnn/rnn_en.rst
3 additions, 3 deletions

@@ -30,7 +30,7 @@ Then at the :code:`process` function, each :code:`yield` function will return th
       yield src_ids, trg_ids, trg_ids_next
 
 
-For more details description of how to write a data provider, please refer to `PyDataProvider2 <../../ui/data_provider/index.html>`_. The full data provider file is located at :code:`demo/seqToseq/dataprovider.py`.
+For more details description of how to write a data provider, please refer to :ref:`api_pydataprovider2_en` . The full data provider file is located at :code:`demo/seqToseq/dataprovider.py`.
 
 ===============================================
 Configure Recurrent Neural Network Architecture
@@ -106,7 +106,7 @@ We will use the sequence to sequence model with attention as an example to demon
 
 In this model, the source sequence :math:`S = \{s_1, \dots, s_T\}` is encoded with a bidirectional gated recurrent neural networks. The hidden states of the bidirectional gated recurrent neural network :math:`H_S = \{H_1, \dots, H_T\}` is called *encoder vector* The decoder is a gated recurrent neural network. When decoding each token :math:`y_t`, the gated recurrent neural network generates a set of weights :math:`W_S^t = \{W_1^t, \dots, W_T^t\}`, which are used to compute a weighted sum of the encoder vector. The weighted sum of the encoder vector is utilized to condition the generation of the token :math:`y_t`.
 
-The encoder part of the model is listed below. It calls :code:`grumemory` to represent gated recurrent neural network. It is the recommended way of using recurrent neural network if the network architecture is simple, because it is faster than :code:`recurrent_group`. We have implemented most of the commonly used recurrent neural network architectures, you can refer to `Layers <../../ui/api/trainer_config_helpers/layers_index.html>`_ for more details.
+The encoder part of the model is listed below. It calls :code:`grumemory` to represent gated recurrent neural network. It is the recommended way of using recurrent neural network if the network architecture is simple, because it is faster than :code:`recurrent_group`. We have implemented most of the commonly used recurrent neural network architectures, you can refer to :ref:`api_trainer_config_helpers_layers` for more details.
 
 We also project the encoder vector to :code:`decoder_size` dimensional space, get the first instance of the backward recurrent network, and project it to :code:`decoder_size` dimensional space:
 
@@ -246,6 +246,6 @@ The code is listed below:
     outputs(beam_gen)
 
 
-Notice that this generation technique is only useful for decoder like generation process. If you are working on sequence tagging tasks, please refer to `Semantic Role Labeling Demo <../../demo/semantic_role_labeling/index.html>`_ for more details.
+Notice that this generation technique is only useful for decoder like generation process. If you are working on sequence tagging tasks, please refer to :ref:`semantic_role_labeling_en` for more details.
 
 The full configuration file is located at :code:`demo/seqToseq/seqToseq_net.py`.

doc/howto/optimization/gpu_profiling_en.rst
3 additions, 3 deletions

@@ -51,7 +51,7 @@ In this tutorial, we will focus on nvprof and nvvp.
 :code:`test_GpuProfiler` from :code:`paddle/math/tests` directory will be used to evaluate
 above profilers.
 
-.. literalinclude:: ../../paddle/math/tests/test_GpuProfiler.cpp
+.. literalinclude:: ../../../paddle/math/tests/test_GpuProfiler.cpp
    :language: c++
    :lines: 111-124
    :linenos:
@@ -77,7 +77,7 @@ As a simple example, consider the following:
 
 1. Add :code:`REGISTER_TIMER_INFO` and :code:`printAllStatus` functions (see the emphasize-lines).
 
-.. literalinclude:: ../../paddle/math/tests/test_GpuProfiler.cpp
+.. literalinclude:: ../../../paddle/math/tests/test_GpuProfiler.cpp
    :language: c++
    :lines: 111-124
    :emphasize-lines: 8-10,13
@@ -124,7 +124,7 @@ To use this command line profiler **nvprof**, you can simply issue the following
 
 1. Add :code:`REGISTER_GPU_PROFILER` function (see the emphasize-lines).
 
-.. literalinclude:: ../../paddle/math/tests/test_GpuProfiler.cpp
+.. literalinclude:: ../../../paddle/math/tests/test_GpuProfiler.cpp
    :language: c++
    :lines: 111-124
   :emphasize-lines: 6-7
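All three hunks in this file make the same edit because ``literalinclude`` resolves its argument relative to the document containing the directive; once this page sits at ``doc/howto/optimization/``, the repository root is three levels up, so each include needs one extra ``../``. The corrected directive, as it appears after this commit:

```rst
.. gpu_profiling_en.rst is three directories below the repository root,
   so the C++ test source is reached with three ../ segments:

.. literalinclude:: ../../../paddle/math/tests/test_GpuProfiler.cpp
   :language: c++
   :lines: 111-124
   :linenos:
```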

doc/tutorials/embedding_model/index_en.md
1 addition, 1 deletion

@@ -93,7 +93,7 @@ where `train.sh` is almost the same as `demo/seqToseq/translation/train.sh`, the
 - `--init_model_path`: path of the initialization model, here is `data/paraphrase_model`
 - `--load_missing_parameter_strategy`: operations when model file is missing, here use a normal distibution to initialize the other parameters except for the embedding layer
 
-For users who want to understand the dataset format, model architecture and training procedure in detail, please refer to [Text generation Tutorial](../text_generation/text_generation.md).
+For users who want to understand the dataset format, model architecture and training procedure in detail, please refer to [Text generation Tutorial](../text_generation/index_en.md).
 
 ## Optional Function ##
 ### Embedding Parameters Observation

doc/tutorials/rec/ml_regression_en.rst
1 addition, 1 deletion

@@ -264,7 +264,7 @@ In this :code:`dataprovider.py`, we should set\:
 * use_seq\: Whether this :code:`dataprovider.py` in sequence mode or not.
 * process\: Return each sample of data to :code:`paddle`.
 
-The data provider details document see :ref:`api_pydataprovider`.
+The data provider details document see :ref:`api_pydataprovider2_en`.
 
 Train
 `````

doc/tutorials/semantic_role_labeling/index_en.md
4 additions, 0 deletions

@@ -1,3 +1,7 @@
+```eval_rst
+.. _semantic_role_labeling_en:
+```
+
 # Semantic Role labeling Tutorial #
 
 Semantic role labeling (SRL) is a form of shallow semantic parsing whose goal is to discover the predicate-argument structure of each predicate in a given input sentence. SRL is useful as an intermediate step in a wide range of natural language processing tasks, such as information extraction. automatic document categorization and question answering. An instance is as following [1]:
