diff --git a/.github/CONTRIBUTING.md b/.github/CONTRIBUTING.md
index f474cb3a..fc0dac2e 100644
--- a/.github/CONTRIBUTING.md
+++ b/.github/CONTRIBUTING.md
@@ -64,7 +64,7 @@ If the latest [documentation versions report](https://github.com/freeCodeCamp/de
Follow these steps to update a documentation to its latest version:
1. Make version/release changes in the scraper file.
-2. Check if the license is still correct. If you update `options[:attribution]`, also update the documentation's entry in the array in [`assets/javascripts/templates/pages/about_tmpl.js`](../assets/javascripts/templates/pages/about_tmpl.js) to match.
+2. Check if the license is still correct. Update `options[:attribution]` if needed (see the sketch after this list).
3. If the documentation has a custom icon, ensure the icons in public/icons/*your_scraper_name*/
are up-to-date. If you pull the updated icon from a place different from the one specified in the `SOURCE` file, make sure to replace the old link with the new one.
4. If `self.links` is defined, check if the URLs are still correct.
5. If the scraper inherits from `FileScraper` rather than `URLScraper`, follow the instructions for that scraper in [`file-scrapers.md`](../docs/file-scrapers.md) to obtain the source material for the scraper.
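+
+In practice most of these steps touch a handful of class-level attributes in the scraper file. Below is a minimal sketch of what an update typically looks like, using a hypothetical scraper (the name, version, URLs and license are placeholders, not a real documentation):
+
+```ruby
+module Docs
+  class MyDoc < UrlScraper                # hypothetical scraper, for illustration only
+    self.release = '2.5.0'                # step 1: bump the version/release
+    self.base_url = 'https://mydoc.example.com/2.5/docs/'
+
+    self.links = {                        # step 4: confirm these URLs still resolve
+      home: 'https://mydoc.example.com/',
+      code: 'https://github.com/example/mydoc'
+    }
+
+    options[:attribution] = <<-HTML       # step 2: keep the copyright/license text current
+      &copy; 2010&ndash;2024 MyDoc contributors<br>
+      Licensed under the MIT License.
+    HTML
+  end
+end
+```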
diff --git a/.github/PULL_REQUEST_TEMPLATE.md b/.github/PULL_REQUEST_TEMPLATE.md
index c93bfc45..029babb2 100644
--- a/.github/PULL_REQUEST_TEMPLATE.md
+++ b/.github/PULL_REQUEST_TEMPLATE.md
@@ -22,7 +22,7 @@ If you’re adding a new scraper, please ensure that you have:
If you're updating existing documentation to its latest version, please ensure that you have:
- [ ] Updated the versions and releases in the scraper file
-- [ ] Ensured the license is up-to-date and that the documentation's entry in the array in `about_tmpl.coffee` matches its data in `self.attribution`
+- [ ] Ensured the license is up-to-date
- [ ] Ensured the icons and the `SOURCE` file in public/icons/*your_scraper_name*/
are up-to-date if the documentation has a custom icon
- [ ] Ensured `self.links` contains up-to-date URLs if `self.links` is defined
- [ ] Tested the changes locally to ensure:
diff --git a/docs/adding-docs.md b/docs/adding-docs.md
index cf543fcc..971b91be 100644
--- a/docs/adding-docs.md
+++ b/docs/adding-docs.md
@@ -16,7 +16,7 @@ Adding a documentation may look like a daunting task but once you get the hang o
9. To customize the pages' styling, create an SCSS file in the `assets/stylesheets/pages/` directory and import it in both `application.css.scss` AND `application-dark.css.scss`. Both the file and CSS class should be named `_[type]` where [type] is equal to the scraper's `type` attribute (documentations with the same type share the same custom CSS and JS). Setting the type to `simple` will apply the general styling rules in `assets/stylesheets/pages/_simple.scss`, which can be used for documentations where little to no CSS changes are needed.
10. To add syntax highlighting or execute custom JavaScript on the pages, create a file in the `assets/javascripts/views/pages/` directory (take a look at the other files to see how it works).
11. Add the documentation's icon in the `public/icons/docs/[my_doc]/` directory, in both 16x16 and 32x32-pixels formats. The icon spritesheet is automatically generated when you (re)start your local DevDocs instance.
-12. Add the documentation's copyright details to the list in `assets/javascripts/templates/pages/about_tmpl.coffee`. This is the data shown in the table on the [about](https://devdocs.io/about) page, and is ordered alphabetically. Simply copying an existing item, placing it in the right slot and updating the values to match the new scraper will do the job.
+12. Add the documentation's copyright and license details to `options[:attribution]` in the scraper. This is the data shown in the table on the [about](https://devdocs.io/about) page (ordered alphabetically). See an existing scraper for the expected format, or the sketch below.
13. Ensure `thor updates:check [my_doc]` shows the correct latest version.
If the documentation includes more than a few hundred pages and is available for download, try to scrape it locally (e.g. using `FileScraper`). It'll make the development process much faster and avoid putting too much load on the source site. (It's not a problem if your scraper is coupled to your local setup; just explain how it works in your pull request.)
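+
+For step 12, the attribution is an HTML snippet assigned in the scraper's class body. A minimal sketch, assuming a hypothetical scraper (the name, years and license are placeholders):
+
+```ruby
+module Docs
+  class MyDoc < UrlScraper              # hypothetical scraper, e.g. lib/docs/scrapers/my_doc.rb
+    options[:attribution] = <<-HTML
+      &copy; 2015&ndash;2024 The MyDoc Authors<br>
+      Licensed under the Creative Commons Attribution-ShareAlike 4.0 License.
+    HTML
+  end
+end
+```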