Merge master

pull/870/head
Jasper van Merle 6 years ago
commit f80d07412c

@ -6,3 +6,6 @@ indent_style = space
indent_size = 2
insert_final_newline = true
trim_trailing_whitespace = true
[*.md]
trim_trailing_whitespace = false

@ -24,20 +24,18 @@ Want to contribute? Great. Please review the following guidelines carefully and
## Requesting new features
1. Search for similar feature requests; someone may have already requested it.
2. Make sure your feature fits DevDocs's [vision](https://github.com/freeCodeCamp/devdocs/blob/master/README.md#vision).
2. Make sure your feature fits DevDocs's [vision](../README.md#vision).
3. Provide a clear and detailed explanation of the feature and why it's important to add it.
For general feedback and ideas, please use the [mailing list](https://groups.google.com/d/forum/devdocs).
## Requesting new documentations
Please don't open issues to request new documentations.
Please don't open issues to request new documentations.
Use the [Trello board](https://trello.com/b/6BmTulfx/devdocs-documentation) where everyone can vote.
## Contributing code and features
1. Search for existing issues; someone may already be working on a similar feature.
2. Before embarking on any significant pull request, please open an issue describing the changes you intend to make. Otherwise you risk spending a lot of time working on something I may not want to merge. This also tells other contributors that you're working on the feature.
2. Before embarking on any significant pull request, please open an issue describing the changes you intend to make. Otherwise you risk spending a lot of time working on something we may not want to merge. This also tells other contributors that you're working on the feature.
3. Follow the [coding conventions](#coding-conventions).
4. If you're modifying the Ruby code, include tests and ensure they pass.
5. Try to keep your pull request small and simple.
@ -46,7 +44,7 @@ Use the [Trello board](https://trello.com/b/6BmTulfx/devdocs-documentation) wher
## Contributing new documentations
See the [wiki](https://github.com/freeCodeCamp/devdocs/wiki) to learn how to add new documentations.
See the [`docs` folder](https://github.com/freeCodeCamp/devdocs/tree/master/docs) to learn how to add new documentations.
**Important:** the documentation's license must permit alteration, redistribution and commercial use, and the documented software must be released under an open source license. Feel free to get in touch if you're not sure if a documentation meets those requirements.
@ -65,13 +63,6 @@ Please don't submit a pull request updating the version number of a documentatio
To ask that an existing documentation be updated, please use the [Trello board](https://trello.com/c/2B0hmW7M/52-request-updates-here).
## Other contributions
Besides new docs and features, here are other ways you can contribute:
* **Improve our copy.** English isn't my first language so if you notice grammatical or usage errors, feel free to submit a pull request — it'll be much appreciated.
* **Participate in the issue tracker.** Your opinion matters — feel free to add comments to existing issues. You're also welcome to participate to the [mailing list](https://groups.google.com/d/forum/devdocs).
## Coding conventions
* two spaces; no tabs
@ -80,4 +71,4 @@ Besides new docs and features, here are other ways you can contribute:
## Questions?
If you have any questions, please feel free to ask on the [mailing list](https://groups.google.com/d/forum/devdocs).
If you have any questions, please feel free to ask them on the contributor chat room on [Gitter](https://gitter.im/FreeCodeCamp/DevDocs).

@ -1,6 +1,6 @@
<!--
Please read the contributing guidelines before opening an issue:
https://github.com/freeCodeCamp/devdocs/blob/master/CONTRIBUTING.md
https://github.com/freeCodeCamp/devdocs/blob/master/.github/CONTRIBUTING.md
To request a new documentation, or an update of an existing documentation, go here:
https://trello.com/b/6BmTulfx/devdocs-documentation

@ -0,0 +1,8 @@
daysUntilClose: 30
responseRequiredLabel: needs-info
closeComment: >
This issue has been automatically closed because there has been no response
to our request for more information from the original author. With only the
information that's currently in the issue, we don't have enough information
to take action. Please comment if you have or find the answer we need so we
can investigate further.

.gitignore

@ -1,11 +1,8 @@
.DS_Store
.bundle
*.pxm
*.sketch
tmp
public/assets
public/fonts
public/docs/**/*
!public/docs/docs.json
!public/docs/**/index.json
log/
docs/**/*
!docs/*.md

@ -1 +1 @@
2.5.1
2.6.0

@ -0,0 +1,2 @@
> Our Code of Conduct is available here: <https://code-of-conduct.freecodecamp.org/>

@ -1,13 +1,13 @@
Copyright 2013-2018 Thibaut Courouble and other contributors
Copyright 2013-2019 Thibaut Courouble and other contributors
This Source Code Form is subject to the terms of the Mozilla Public
License, v. 2.0. If a copy of the MPL was not distributed with this
file, You can obtain one at http://mozilla.org/MPL/2.0/.
Please do not use the name DevDocs to endorse or promote products
derived from this software without my permission, except as may be
necessary to comply with the notice/attribution requirements.
derived from this software without the maintainers' permission, except
as may be necessary to comply with the notice/attribution requirements.
I also wish that any documentation file generated using this software
We also wish that any documentation file generated using this software
be attributed to DevDocs. Let's be fair to all contributors by giving
credit where credit's due. Thanks.

@ -1,4 +1,4 @@
FROM ruby:2.5.1
FROM ruby:2.6.0
ENV LANG=C.UTF-8

@ -1,4 +1,4 @@
FROM ruby:2.5.1-alpine
FROM ruby:2.6.0-alpine
ENV LANG=C.UTF-8

@ -1,11 +1,14 @@
source 'https://rubygems.org'
ruby '2.5.1'
ruby '2.6.0'
gem 'rake'
gem 'thor'
gem 'pry', '~> 0.11.0'
gem 'pry', '~> 0.12.0'
gem 'activesupport', '~> 5.2', require: false
gem 'yajl-ruby', require: false
gem 'html-pipeline'
gem 'typhoeus'
gem 'nokogiri'
group :app do
gem 'rack'
@ -33,14 +36,12 @@ group :development do
end
group :docs do
gem 'typhoeus'
gem 'nokogiri'
gem 'html-pipeline'
gem 'image_optim'
gem 'image_optim_pack', platforms: :ruby
gem 'progress_bar', require: false
gem 'unix_utils', require: false
gem 'tty-pager', require: false
gem 'net-sftp', '>= 2.1.3.rc2', require: false
end
group :test do

@ -1,7 +1,7 @@
GEM
remote: https://rubygems.org/
specs:
activesupport (5.2.1)
activesupport (5.2.2)
concurrent-ruby (~> 1.0, >= 1.0.2)
i18n (>= 0.7, < 2)
minitest (~> 5.1)
@ -18,21 +18,21 @@ GEM
coffee-script-source
execjs
coffee-script-source (1.12.2)
concurrent-ruby (1.0.5)
daemons (1.2.6)
erubi (1.7.1)
ethon (0.11.0)
concurrent-ruby (1.1.4)
daemons (1.3.1)
erubi (1.8.0)
ethon (0.12.0)
ffi (>= 1.3.0)
eventmachine (1.2.7)
execjs (2.7.0)
exifr (1.3.4)
ffi (1.9.25)
exifr (1.3.5)
ffi (1.10.0)
fspath (3.1.0)
highline (2.0.0)
html-pipeline (2.8.4)
html-pipeline (2.10.0)
activesupport (>= 2)
nokogiri (>= 1.4)
i18n (1.1.1)
i18n (1.5.2)
concurrent-ruby (~> 1.0)
image_optim (0.26.3)
exifr (~> 1.2, >= 1.2.2)
@ -40,55 +40,57 @@ GEM
image_size (>= 1.5, < 3)
in_threads (~> 1.3)
progress (~> 3.0, >= 3.0.1)
image_optim_pack (0.5.1)
image_optim_pack (0.5.1.20190105)
fspath (>= 2.1, < 4)
image_optim (~> 0.19)
image_size (2.0.0)
in_threads (1.5.0)
method_source (0.9.0)
mini_portile2 (2.3.0)
in_threads (1.5.1)
method_source (0.9.2)
mini_portile2 (2.4.0)
minitest (5.11.3)
multi_json (1.13.1)
mustermann (1.0.3)
newrelic_rpm (5.4.0.347)
nokogiri (1.8.5)
mini_portile2 (~> 2.3.0)
net-sftp (3.0.0.beta1)
net-ssh (>= 5.0.0, < 6.0.0)
net-ssh (5.1.0)
newrelic_rpm (5.7.0.350)
nokogiri (1.10.1)
mini_portile2 (~> 2.4.0)
options (2.3.2)
progress (3.5.0)
progress_bar (1.3.0)
highline (>= 1.6, < 3)
options (~> 2.3.0)
pry (0.11.3)
pry (0.12.2)
coderay (~> 1.1.0)
method_source (~> 0.9.0)
rack (2.0.5)
rack-protection (2.0.4)
rack (2.0.6)
rack-protection (2.0.5)
rack
rack-ssl-enforcer (0.2.9)
rack-test (1.1.0)
rack (>= 1.0, < 3)
rake (12.3.1)
rake (12.3.2)
rb-fsevent (0.10.3)
rb-inotify (0.9.10)
ffi (>= 0.5.0, < 2)
rb-inotify (0.10.0)
ffi (~> 1.0)
rr (1.2.1)
sass (3.6.0)
sass (3.7.3)
sass-listen (~> 4.0.0)
sass-listen (4.0.0)
rb-fsevent (~> 0.9, >= 0.9.4)
rb-inotify (~> 0.9, >= 0.9.7)
sinatra (2.0.4)
sinatra (2.0.5)
mustermann (~> 1.0)
rack (~> 2.0)
rack-protection (= 2.0.4)
rack-protection (= 2.0.5)
tilt (~> 2.0)
sinatra-contrib (2.0.4)
activesupport (>= 4.0.0)
sinatra-contrib (2.0.5)
backports (>= 2.8.2)
multi_json
mustermann (~> 1.0)
rack-protection (= 2.0.4)
sinatra (= 2.0.4)
rack-protection (= 2.0.5)
sinatra (= 2.0.5)
tilt (>= 1.3, < 3)
sprockets (3.7.2)
concurrent-ruby (~> 1.0)
@ -106,22 +108,22 @@ GEM
daemons (~> 1.0, >= 1.0.9)
eventmachine (~> 1.0, >= 1.0.4)
rack (>= 1, < 3)
thor (0.20.0)
thor (0.20.3)
thread_safe (0.3.6)
tilt (2.0.8)
tty-pager (0.11.0)
strings (~> 0.1.0)
tty-screen (~> 0.6.4)
tty-which (~> 0.3.0)
tilt (2.0.9)
tty-pager (0.12.0)
strings (~> 0.1.4)
tty-screen (~> 0.6)
tty-which (~> 0.4)
tty-screen (0.6.5)
tty-which (0.3.0)
typhoeus (1.3.0)
tty-which (0.4.0)
typhoeus (1.3.1)
ethon (>= 0.9.0)
tzinfo (1.2.5)
thread_safe (~> 0.1)
uglifier (4.1.19)
uglifier (4.1.20)
execjs (>= 0.3.0, < 3)
unicode-display_width (1.4.0)
unicode-display_width (1.4.1)
unicode_utils (1.4.0)
unix_utils (0.0.15)
yajl-ruby (1.4.1)
@ -140,10 +142,11 @@ DEPENDENCIES
image_optim
image_optim_pack
minitest
net-sftp (>= 2.1.3.rc2)
newrelic_rpm
nokogiri
progress_bar
pry (~> 0.11.0)
pry (~> 0.12.0)
rack
rack-ssl-enforcer
rack-test
@ -164,7 +167,7 @@ DEPENDENCIES
yajl-ruby
RUBY VERSION
ruby 2.5.1p57
ruby 2.6.0p0
BUNDLED WITH
1.16.6
1.17.2

@ -0,0 +1,11 @@
If you're adding a new scraper, please ensure that you have:
- [ ] Tested the scraper on a local copy of DevDocs
- [ ] Ensured that the docs are styled similarly to other docs on DevDocs
<!-- If the docs don't have an icon, delete the next four items: -->
- [ ] Added these files to the <code>public/icons/*your_scraper_name*/</code> directory:
- [ ] `16.png`: a 16×16 pixel icon for the doc
- [ ] `16@2x.png`: a 32×32 pixel icon for the doc
- [ ] `SOURCE`: A text file containing the URL to the page the image can be found on or the URL of the original image itself
<!-- Replace the `[ ]` with a `[x]` once you've completed each step. -->

@ -1,25 +1,24 @@
# [DevDocs](https://devdocs.io) [![Build Status](https://travis-ci.org/freeCodeCamp/devdocs.svg?branch=master)](https://travis-ci.org/freeCodeCamp/devdocs)
# [DevDocs](https://devdocs.io) — API Documentation Browser [![Build Status](https://travis-ci.org/freeCodeCamp/devdocs.svg?branch=master)](https://travis-ci.org/freeCodeCamp/devdocs)
DevDocs combines multiple API documentations in a fast, organized, and searchable interface.
DevDocs combines multiple developer documentations in a clean and organized web UI with instant search, offline support, mobile version, dark theme, keyboard shortcuts, and more.
* Created by [Thibaut Courouble](https://thibaut.me)
DevDocs was created by [Thibaut Courouble](https://thibaut.me) and is operated by [freeCodeCamp](https://www.freecodecamp.org).
Keep track of development news:
* Join the contributor chat room on [Gitter](https://gitter.im/FreeCodeCamp/DevDocs)
* Watch the repository on [GitHub](https://github.com/freeCodeCamp/devdocs/subscription)
* Follow [@DevDocs](https://twitter.com/DevDocs) on Twitter
* Join the [mailing list](https://groups.google.com/d/forum/devdocs)
**Table of Contents:** [Quick Start](#quick-start) · [Vision](#vision) · [App](#app) · [Scraper](#scraper) · [Commands](#available-commands) · [Contributing](#contributing) · [License](#copyright--license) · [Questions?](#questions)
**Table of Contents:** [Quick Start](#quick-start) · [Vision](#vision) · [App](#app) · [Scraper](#scraper) · [Commands](#available-commands) · [Contributing](#contributing) · [Documentation](#documentation) · [Plugins and Extensions](#plugins-and-extensions) · [License](#copyright--license) · [Questions?](#questions)
## Quick Start
Unless you wish to contribute to the project, I recommend using the hosted version at [devdocs.io](https://devdocs.io). It's up-to-date and works offline out-of-the-box.
Unless you wish to contribute to the project, we recommend using the hosted version at [devdocs.io](https://devdocs.io). It's up-to-date and works offline out-of-the-box.
DevDocs is made of two pieces: a Ruby scraper that generates the documentation and metadata, and a JavaScript app powered by a small Sinatra app.
DevDocs requires Ruby 2.5.1, libcurl, and a JavaScript runtime supported by [ExecJS](https://github.com/rails/execjs#readme) (included in OS X and Windows; [Node.js](https://nodejs.org/en/) on Linux). Once you have these installed, run the following commands:
DevDocs requires Ruby 2.6.0, libcurl, and a JavaScript runtime supported by [ExecJS](https://github.com/rails/execjs#readme) (included in OS X and Windows; [Node.js](https://nodejs.org/en/) on Linux). Once you have these installed, run the following commands:
```
git clone https://github.com/freeCodeCamp/devdocs.git && cd devdocs
@ -84,12 +83,13 @@ Modifications made to each document include:
* replacing all external (not scraped) URLs with their fully qualified counterpart
* replacing all internal (scraped) URLs with their unqualified and relative counterpart
* adding content, such as a title and link to the original document
* ensuring correct syntax highlighting using [Prism](http://prismjs.com/)
These modifications are applied via a set of filters using the [HTML::Pipeline](https://github.com/jch/html-pipeline) library. Each scraper includes filters specific to itself, one of which is tasked with figuring out the pages' metadata.
The end result is a set of normalized HTML partials and two JSON files (index + offline data). Because the index files are loaded separately by the [app](#app) following the user's preferences, the scraper also creates a JSON manifest file containing information about the documentations currently available on the system (such as their name, version, update date, etc.).
More information about scrapers and filters is available on the [wiki](https://github.com/freeCodeCamp/devdocs/wiki).
More information about [scrapers](./docs/scraper-reference.md) and [filters](./docs/filter-reference.md) is available in the `docs` folder.
## Available Commands
@ -129,20 +129,45 @@ If multiple versions of Ruby are installed on your system, commands must be run
## Contributing
Contributions are welcome. Please read the [contributing guidelines](https://github.com/freeCodeCamp/devdocs/blob/master/CONTRIBUTING.md).
DevDocs's own documentation is available on the [wiki](https://github.com/freeCodeCamp/devdocs/wiki).
Contributions are welcome. Please read the [contributing guidelines](./.github/CONTRIBUTING.md).
## Documentation
* [Adding documentations to DevDocs](./docs/adding-docs.md)
* [Scraper Reference](./docs/scraper-reference.md)
* [Filter Reference](./docs/filter-reference.md)
* [Maintainers Guide](./docs/maintainers.md)
## Plugins and Extensions
* [Chrome web app](https://chrome.google.com/webstore/detail/devdocs/mnfehgbmkapmjnhcnbodoamcioleeooe)
* [Ubuntu Touch app](https://uappexplorer.com/app/devdocsunofficial.berkes)
* [Sublime Text plugin](https://sublime.wbond.net/packages/DevDocs)
* [Atom plugin](https://atom.io/packages/devdocs)
* [Brackets extension](https://github.com/gruehle/dev-docs-viewer)
* [Fluid](http://fluidapp.com) for turning DevDocs into a real OS X app
* [GTK shell / Vim integration](https://github.com/naquad/devdocs-shell)
* [Emacs lookup](https://github.com/skeeto/devdocs-lookup)
* [Alfred Workflow](https://github.com/yannickglt/alfred-devdocs)
* [Vim search plugin with Devdocs in its defaults](https://github.com/waiting-for-dev/vim-www) Just set `let g:www_shortcut_engines = { 'devdocs': ['Devdocs', '<leader>dd'] }` to have a `:Devdocs` command and a `<leader>dd` mapping.
* [Visual Studio Code plugin](https://marketplace.visualstudio.com/items?itemName=akfish.vscode-devdocs ) (1)
* [Visual Studio Code plugin](https://marketplace.visualstudio.com/items?itemName=deibit.devdocs) (2)
* [Desktop application](https://github.com/egoist/devdocs-desktop)
* [Doc Browser](https://github.com/qwfy/doc-browser) is a native Linux app that supports DevDocs docsets
* [GNOME Application](https://github.com/hardpixel/devdocs-desktop) GTK3 application with search integrated in headerbar
* [macOS Application](https://github.com/dteoh/devdocs-macos)
* [Android Application](https://github.com/Merith-TK/devdocs_webapp_kotlin) is a fully working, advanced WebView with AppCache enabled
## Copyright / License
Copyright 2013-2018 Thibaut Courouble and [other contributors](https://github.com/freeCodeCamp/devdocs/graphs/contributors)
Copyright 2013-2019 Thibaut Courouble and [other contributors](https://github.com/freeCodeCamp/devdocs/graphs/contributors)
This software is licensed under the terms of the Mozilla Public License v2.0. See the [COPYRIGHT](https://github.com/freeCodeCamp/devdocs/blob/master/COPYRIGHT) and [LICENSE](https://github.com/freeCodeCamp/devdocs/blob/master/LICENSE) files.
This software is licensed under the terms of the Mozilla Public License v2.0. See the [COPYRIGHT](./COPYRIGHT) and [LICENSE](./LICENSE) files.
Please do not use the name DevDocs to endorse or promote products derived from this software without my permission, except as may be necessary to comply with the notice/attribution requirements.
Please do not use the name DevDocs to endorse or promote products derived from this software without the maintainers' permission, except as may be necessary to comply with the notice/attribution requirements.
I also wish that any documentation file generated using this software be attributed to DevDocs. Let's be fair to all contributors by giving credit where credit's due. Thanks!
We also wish that any documentation file generated using this software be attributed to DevDocs. Let's be fair to all contributors by giving credit where credit's due. Thanks!
## Questions?
If you have any questions, please feel free to ask them on the [mailing list](https://groups.google.com/d/forum/devdocs).
If you have any questions, please feel free to ask them on the contributor chat room on [Gitter](https://gitter.im/FreeCodeCamp/DevDocs).

@ -3,6 +3,7 @@
require 'bundler/setup'
require 'thor'
Bundler.require :default
$LOAD_PATH.unshift 'lib'
task :default do
@ -13,6 +14,9 @@ end
namespace :assets do
desc 'Compile all assets'
task :precompile do
load 'tasks/docs.thor'
DocsCLI.new.prepare_deploy
load 'tasks/assets.thor'
AssetsCLI.new.compile
end

@ -157,10 +157,17 @@
new app.views.Updates()
@updateChecker = new app.UpdateChecker()
reboot: ->
if location.pathname isnt '/' and location.pathname isnt '/settings'
window.location = "/##{location.pathname}"
else
window.location = '/'
return
reload: ->
@docs.clearCache()
@disabledDocs.clearCache()
if @appCache then @appCache.reload() else window.location = '/'
if @appCache then @appCache.reload() else @reboot()
return
reset: ->

@ -27,7 +27,7 @@ class app.AppCache
return
reload: ->
$.on @cache, 'updateready noupdate error', -> window.location = '/'
$.on @cache, 'updateready noupdate error', -> app.reboot()
@notifyUpdate = false
@notifyProgress = true
try @cache.update() catch

@ -1,5 +1,5 @@
###
* Copyright 2013-2018 Thibaut Courouble and other contributors
* Copyright 2013-2019 Thibaut Courouble and other contributors
*
* This source code is licensed under the terms of the Mozilla
* Public License, v. 2.0, a copy of which may be obtained at:

@ -62,12 +62,12 @@ app.templates.unsupportedBrowser = """
<li>iOS 10+
</ul>
<p class="_fail-text">
If you're unable to upgrade, I apologize.
I decided to prioritize speed and new features over support for older browsers.
If you're unable to upgrade, we apologize.
We decided to prioritize speed and new features over support for older browsers.
<p class="_fail-text">
Note: if you're already using one of the browsers above, check your settings and add-ons.
The app uses feature detection, not user agent sniffing.
<p class="_fail-text">
&mdash; Thibaut <a href="https://twitter.com/DevDocs" class="_fail-link">@DevDocs</a>
&mdash; <a href="https://twitter.com/DevDocs">@DevDocs</a>
</div>
"""

@ -11,22 +11,18 @@ app.templates.aboutPage = -> """
</nav>
<h1 class="_lined-heading">DevDocs: API Documentation Browser</h1>
<p>DevDocs combines multiple API documentations in a clean and organized web UI with instant search, offline support, mobile version, dark theme, keyboard shortcuts, and more.
<ul>
<li>Created and maintained by <a href="https://thibaut.me">Thibaut Courouble</a>
<li>Free and <a href="https://github.com/freeCodeCamp/devdocs">open source</a>
<iframe class="_github-btn" src="https://ghbtns.com/github-btn.html?user=freeCodeCamp&repo=devdocs&type=watch&count=true" allowtransparency="true" frameborder="0" scrolling="0" width="100" height="20" tabindex="-1"></iframe>
</ul>
<p>DevDocs combines multiple developer documentations in a clean and organized web UI with instant search, offline support, mobile version, dark theme, keyboard shortcuts, and more.
<p>DevDocs is free and <a href="https://github.com/freeCodeCamp/devdocs">open source</a>. It was created by <a href="https://thibaut.me">Thibaut Courouble</a> and is operated by <a href="https://www.freecodecamp.org/">freeCodeCamp</a>.
<p>To keep up-to-date with the latest news:
<ul>
<li>Follow <a href="https://twitter.com/DevDocs">@DevDocs</a> on Twitter
<li>Watch the repository on <a href="https://github.com/freeCodeCamp/devdocs/subscription">GitHub</a>
<li>Join the <a href="https://groups.google.com/d/forum/devdocs">mailing list</a>
<li>Watch the repository on <a href="https://github.com/freeCodeCamp/devdocs/subscription">GitHub</a> <iframe class="_github-btn" src="https://ghbtns.com/github-btn.html?user=freeCodeCamp&repo=devdocs&type=watch&count=true" allowtransparency="true" frameborder="0" scrolling="0" width="100" height="20" tabindex="-1"></iframe>
<li>Join the <a href="https://gitter.im/FreeCodeCamp/DevDocs">Gitter</a> chat room
</ul>
<h2 class="_block-heading" id="copyright">Copyright and License</h2>
<p class="_note">
<strong>Copyright 2013&ndash;2018 Thibaut Courouble and <a href="https://github.com/freeCodeCamp/devdocs/graphs/contributors">other contributors</a></strong><br>
<strong>Copyright 2013&ndash;2019 Thibaut Courouble and <a href="https://github.com/freeCodeCamp/devdocs/graphs/contributors">other contributors</a></strong><br>
This software is licensed under the terms of the Mozilla Public License v2.0.<br>
You may obtain a copy of the source code at <a href="https://github.com/freeCodeCamp/devdocs">github.com/freeCodeCamp/devdocs</a>.<br>
For more information, see the <a href="https://github.com/freeCodeCamp/devdocs/blob/master/COPYRIGHT">COPYRIGHT</a>
@ -48,11 +44,10 @@ app.templates.aboutPage = -> """
<dt>Where can I suggest new docs and features?
<dd>You can suggest and vote for new docs on the <a href="https://trello.com/b/6BmTulfx/devdocs-documentation">Trello board</a>.<br>
If you have a specific feature request, add it to the <a href="https://github.com/freeCodeCamp/devdocs/issues">issue tracker</a>.<br>
Otherwise use the <a href="https://groups.google.com/d/forum/devdocs">mailing list</a>.
Otherwise, come talk to us in the <a href="https://gitter.im/FreeCodeCamp/DevDocs">Gitter</a> chat room.
<dt>Where can I report bugs?
<dd>In the <a href="https://github.com/freeCodeCamp/devdocs/issues">issue tracker</a>. Thanks!
</dl>
<p>For anything else, feel free to email me at <a href="mailto:thibaut@devdocs.io">thibaut@devdocs.io</a>.
<h2 class="_block-heading" id="credits">Credits</h2>
@ -76,12 +71,12 @@ app.templates.aboutPage = -> """
<h2 class="_block-heading" id="privacy">Privacy Policy</h2>
<ul>
<li><a href="https://devdocs.io">devdocs.io</a> ("App") is operated by Thibaut Courouble ("We").
<li>We do not collect personal information.
<li><a href="https://devdocs.io">devdocs.io</a> ("App") is operated by <a href="https://www.freecodecamp.org/">freeCodeCamp</a> ("We").
<li>We do not collect personal information through the app.
<li>We use Google Analytics, Gauges and Sentry to collect anonymous traffic information and improve the app.
<li>The app uses cookies to store user preferences.
<li>By using the app, you signify your acceptance of this policy. If you do not agree to this policy, please do not use the app.
<li>If you have any questions regarding privacy, please email <a href="mailto:thibaut@devdocs.io">thibaut@devdocs.io</a>.
<li>If you have any questions regarding privacy, please email <a href="mailto:privacy@freecodecamp.org">privacy@freecodecamp.org</a>.
</ul>
"""
@ -102,7 +97,7 @@ credits = [
'https://www.apache.org/licenses/LICENSE-2.0'
], [
'Async',
'2010-2017 Caolan McMahon',
'2010-2018 Caolan McMahon',
'MIT',
'https://raw.githubusercontent.com/caolan/async/master/LICENSE'
], [
@ -192,7 +187,7 @@ credits = [
'https://raw.githubusercontent.com/jashkenas/coffeescript/master/LICENSE'
], [
'Cordova',
'2012-2017 The Apache Software Foundation',
'2012-2018 The Apache Software Foundation',
'Apache',
'https://raw.githubusercontent.com/apache/cordova-docs/master/LICENSE'
], [
@ -372,7 +367,7 @@ credits = [
'https://raw.githubusercontent.com/jquery/api.jqueryui.com/master/LICENSE.txt'
], [
'Julia',
'2009-2016 Jeff Bezanson, Stefan Karpinski, Viral B. Shah, and other contributors',
'2009-2018 Jeff Bezanson, Stefan Karpinski, Viral B. Shah, and other contributors',
'MIT',
'https://raw.githubusercontent.com/JuliaLang/julia/master/LICENSE.md'
], [
@ -442,7 +437,7 @@ credits = [
'https://daringfireball.net/projects/markdown/license'
], [
'Matplotlib',
'2012-2017 Matplotlib Development Team. All rights reserved.',
'2012-2018 Matplotlib Development Team. All rights reserved.',
'Custom',
'https://raw.githubusercontent.com/matplotlib/matplotlib/master/LICENSE/LICENSE'
], [
@ -647,7 +642,7 @@ credits = [
'http://scikit-image.org/docs/dev/license.html'
], [
'scikit-learn',
'2007-2017 The scikit-learn developers',
'2007-2018 The scikit-learn developers',
'BSD',
'https://raw.githubusercontent.com/scikit-learn/scikit-learn/master/COPYING'
], [
@ -692,9 +687,9 @@ credits = [
'https://raw.githubusercontent.com/hashicorp/terraform-website/master/LICENSE.md'
], [
'Twig',
'2009-2017 The Twig Team',
'2009-2018 The Twig Team',
'BSD',
'https://twig.sensiolabs.org/license'
'https://twig.symfony.com/license'
], [
'TypeScript',
'Microsoft and other contributors',
@ -702,7 +697,7 @@ credits = [
'https://raw.githubusercontent.com/Microsoft/TypeScript-Handbook/master/LICENSE'
], [
'Underscore.js',
'2009-2017 Jeremy Ashkenas, DocumentCloud and Investigative Reporters & Editors',
'2009-2018 Jeremy Ashkenas, DocumentCloud and Investigative Reporters & Editors',
'MIT',
'https://raw.githubusercontent.com/jashkenas/underscore/master/LICENSE'
], [

@ -14,7 +14,7 @@ app.templates.intro = """
<li>Run <code>thor docs:download --installed</code> to update all downloaded documentations.
<li>To be notified about new versions, don't forget to <a href="https://github.com/freeCodeCamp/devdocs/subscription">watch the repository</a> on GitHub.
<li>The <a href="https://github.com/freeCodeCamp/devdocs/issues">issue tracker</a> is the preferred channel for bug reports and
feature requests. For everything else, use the <a href="https://groups.google.com/d/forum/devdocs">mailing list</a>.
feature requests. For everything else, use <a href="https://gitter.im/FreeCodeCamp/DevDocs">Gitter</a>.
<li>Contributions are welcome. See the <a href="https://github.com/freeCodeCamp/devdocs/blob/master/CONTRIBUTING.md">guidelines</a>.
<li>DevDocs is licensed under the terms of the Mozilla Public License v2.0. For more information,
see the <a href="https://github.com/freeCodeCamp/devdocs/blob/master/COPYRIGHT">COPYRIGHT</a> and

@ -57,6 +57,7 @@ class app.views.OfflinePage extends app.View
doc[action](@onInstallSuccess.bind(@, doc), @onInstallError.bind(@, doc), @onInstallProgress.bind(@, doc))
el.parentNode.innerHTML = "#{el.textContent.replace(/e$/, '')}ing…"
else if action = el.getAttribute('data-action-all')
return unless action isnt 'uninstall' or window.confirm('Uninstall all docs?')
app.db.migrate()
$.click(el) for el in @findAll("[data-action='#{action}']")
return

@ -77,7 +77,7 @@ class app.views.Document extends app.View
switch target.getAttribute('data-behavior')
when 'back' then history.back()
when 'reload' then window.location.reload()
when 'reboot' then window.location = '/'
when 'reboot' then app.reboot()
when 'hard-reload' then app.reload()
when 'reset' then app.reset() if confirm('Are you sure you want to reset DevDocs?')
return

@ -3,7 +3,7 @@
//= depend_on sprites/docs.json
/*!
* Copyright 2013-2018 Thibaut Courouble and other contributors
* Copyright 2013-2019 Thibaut Courouble and other contributors
*
* This source code is licensed under the terms of the Mozilla
* Public License, v. 2.0, a copy of which may be obtained at:

@ -460,4 +460,5 @@
display: inline-block;
vertical-align: text-top;
margin-left: .25rem;
background: inherit;
}

@ -32,5 +32,3 @@
}
._fail-text:last-child { margin: 0; }
._fail-link { float: right; }

@ -30,6 +30,7 @@
.notice,
.warning,
.overheadIndicator,
.blockIndicator,
.syntaxbox, // CSS, JavaScript
.twopartsyntaxbox, // CSS
.inheritsbox, // JavaScript
@ -104,4 +105,28 @@
.cleared { clear: both; } // CSS/box-shadow
code > strong { font-weight: normal; }
// Compatibility tables
.bc-github-link {
float: right;
font-size: .75rem;
}
.bc-supports-yes, .bc-supports-yes + dd, .bc-supports-yes + dd + dd { background: var(--noteGreenBackground); }
.bc-supports-partial, .bc-supports-partial + dd, .bc-supports-partial + dd + dd { background: var(--noteOrangeBackground); }
.bc-supports-no, .bc-supports-no + dd, .bc-supports-no + dd + dd { background: var(--noteRedBackground); }
.bc-table {
min-width: 100%;
dl {
margin: .25rem 0 0;
padding: .25rem 0 0;
font-size: .75rem;
border-top: 1px solid var(--boxBorder);
}
dd { margin: 0; }
}
}

@ -20,5 +20,8 @@
margin: 0 0 1em 1em;
@extend %label;
}
.srclink { float: right; }
details > table { margin: 0; }
}

@ -33,19 +33,8 @@
}
}
.method-description { position: relative; }
.method-source-code {
display: none;
position: absolute;
z-index: 1;
top: 0;
left: -1em;
right: 0;
background: var(--contentBackground);
box-shadow: 0 1em 1em 1em var(--contentBackground);
> pre { margin: 0; }
}
// Rails guides

@ -0,0 +1,23 @@
Adding a documentation may look like a daunting task but once you get the hang of it, it's actually quite simple. Don't hesitate to ask for help [in Gitter](https://gitter.im/FreeCodeCamp/DevDocs) if you ever get stuck.
**Note:** please read the [contributing guidelines](../.github/CONTRIBUTING.md) before submitting a new documentation.
1. Create a subclass of `Docs::UrlScraper` or `Docs::FileScraper` in the `lib/docs/scrapers/` directory. Its name should be the [PascalCase](http://api.rubyonrails.org/classes/String.html#method-i-camelize) equivalent of the filename (e.g. `my_doc` → `MyDoc`); a minimal skeleton of such a class is sketched after this list.
2. Add the appropriate class attributes and filter options (see the [Scraper Reference](./scraper-reference.md) page).
3. Check that the scraper is listed in `thor docs:list`.
4. Create filters specific to the scraper in the `lib/docs/filters/[my_doc]/` directory and add them to the class's [filter stacks](./scraper-reference.md#filter-stacks). You may create any number of filters but will need at least the following two:
* A [`CleanHtml`](./filter-reference.md#cleanhtmlfilter) filter whose task is to clean the HTML markup (e.g. adding `id` attributes to headings) and remove everything superfluous and/or nonessential.
* An [`Entries`](./filter-reference.md#entriesfilter) filter whose task is to determine the pages' metadata (the list of entries, each with a name, type and path).
The [Filter Reference](./filter-reference.md) page has all the details about filters.
5. Using the `thor docs:page [my_doc] [path]` command, check that the scraper works properly. Files will appear in the `public/docs/[my_doc]/` directory (but not inside the app as the command doesn't touch the index). `path` in this case refers to either the remote path (if using `UrlScraper`) or the local path (if using `FileScraper`).
6. Generate the full documentation using the `thor docs:generate [my_doc] --force` command. Additionally, you can use the `--verbose` option to see which files are being created/updated/deleted (useful to see what changed since the last run), and the `--debug` option to see which URLs are being requested and added to the queue (useful to pin down which page adds unwanted URLs to the queue).
7. Start the server, open the app, enable the documentation, and see how everything plays out.
8. Tweak the scraper/filters and repeat 5) and 6) until the pages and metadata are ok.
9. To customize the pages' styling, create an SCSS file in the `assets/stylesheets/pages/` directory and import it in both `application.css.scss` AND `application-dark.css.scss`. Both the file and CSS class should be named `_[type]` where [type] is equal to the scraper's `type` attribute (documentations with the same type share the same custom CSS and JS). Setting the type to `simple` will apply the general styling rules in `assets/stylesheets/pages/_simple.scss`, which can be used for documentations where little to no CSS changes are needed.
10. To add syntax highlighting or execute custom JavaScript on the pages, create a file in the `assets/javascripts/views/pages/` directory (take a look at the other files to see how it works).
11. Add the documentation's icon in the `public/icons/docs/[my_doc]/` directory, in both 16x16 and 32x32 pixel formats. It'll be added to the icon spritesheet after your pull request is merged.
12. Add the documentation's copyright details to the list in `assets/javascripts/templates/pages/about_tmpl.coffee`. This is the data shown in the table on the [about](https://devdocs.io/about) page, and is ordered alphabetically. Simply copying an existing item, placing it in the right slot and updating the values to match the new scraper will do the job.
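To make steps 1, 2 and 4 more concrete, here is a rough sketch of what a new scraper might look like. Everything in it — the `MyDoc` name, URL, filter names and options — is a hypothetical placeholder rather than a real documentation; see the [Scraper Reference](./scraper-reference.md) and [Filter Reference](./filter-reference.md) for the actual attributes and filters available.

```ruby
# lib/docs/scrapers/my_doc.rb (hypothetical example)
module Docs
  class MyDoc < UrlScraper
    self.name = 'MyDoc'
    self.type = 'my_doc'
    self.release = '1.0.0' # version of the documented software
    self.base_url = 'https://example.com/docs/'

    # Custom filters live in lib/docs/filters/my_doc/ (step 4)
    html_filters.push 'my_doc/entries', 'my_doc/clean_html'

    options[:skip] = %w(changelog.html) # pages to ignore

    # Copyright notice appended to every page by the core AttributionFilter
    options[:attribution] = <<-HTML
      &copy; Example contributors<br>
      Licensed under the Example License.
    HTML
  end
end
```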
If the documentation includes more than a few hundred pages and is available for download, try to scrape it locally (e.g. using `FileScraper`). It'll make the development process much faster and avoid putting too much load on the source site. (It's not a problem if your scraper is coupled to your local setup, just explain how it works in your pull request.)
Finally, try to document your scraper and filters' behavior as much as possible using comments (e.g. why some URLs are ignored, why certain HTML markup is removed, why the metadata is extracted that way, etc.). It'll make updating the documentation much easier.

@ -0,0 +1,224 @@
**Table of contents:**
* [Overview](#overview)
* [Instance methods](#instance-methods)
* [Core filters](#core-filters)
* [Custom filters](#custom-filters)
- [CleanHtmlFilter](#cleanhtmlfilter)
- [EntriesFilter](#entriesfilter)
## Overview
Filters use the [HTML::Pipeline](https://github.com/jch/html-pipeline) library. They take an HTML string or [Nokogiri](http://nokogiri.org/) node as input, optionally perform modifications and/or extract information from it, and then output the result. Together they form a pipeline where each filter hands its output to the next filter's input. Every documentation page passes through this pipeline before being copied on the local filesystem.
Filters are subclasses of the [`Docs::Filter`](https://github.com/Thibaut/devdocs/blob/master/lib/docs/core/filter.rb) class and require a `call` method. A basic implementation looks like this:
```ruby
module Docs
class CustomFilter < Filter
def call
doc
end
end
end
```
Filters which manipulate the Nokogiri node object (`doc` and related methods) are _HTML filters_ and must not manipulate the HTML string (`html`). Vice-versa, filters which manipulate the string representation of the document are _text filters_ and must not manipulate the Nokogiri node object. The two types are divided into two stacks within the scrapers. These stacks are then combined into a pipeline that calls the HTML filters before the text filters (more details [here](./scraper-reference.md#filter-stacks)). This is to avoid parsing the document multiple times.
The `call` method must return either `doc` or `html`, depending on the type of filter.
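For comparison, a minimal _text filter_ might look like the sketch below. It belongs to the `text_filters` stack, works on the `html` string, and returns a string; the class name and the substitution it performs are made up for illustration.

```ruby
module Docs
  class MyScraper
    class ReplaceEntitiesFilter < Filter
      def call
        # Text filters manipulate the string representation of the document
        # and must return it (not `doc`).
        html.gsub('&nbsp;', ' ')
      end
    end
  end
end
```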
## Instance methods
* `doc` [Nokogiri::XML::Node]
The Nokogiri representation of the container element.
See [Nokogiri's API docs](http://www.rubydoc.info/github/sparklemotion/nokogiri/Nokogiri/XML/Node) for the list of available methods.
* `html` [String]
The string representation of the container element.
* `context` [Hash] **(frozen)**
The scraper's `options` along with a few additional keys: `:base_url`, `:root_url`, `:root_page` and `:url`.
* `result` [Hash]
Used to store the page's metadata and pass back information to the scraper.
Possible keys:
- `:path` — the page's normalized path
- `:store_path` — the path where the page will be stored (equal to `:path` with `.html` at the end)
- `:internal_urls` — the list of distinct internal URLs found within the page
- `:entries` — the [`Entry`](https://github.com/Thibaut/devdocs/blob/master/lib/docs/core/models/entry.rb) objects to add to the index
* `css`, `at_css`, `xpath`, `at_xpath`
Shortcuts for `doc.css`, `doc.xpath`, etc.
* `base_url`, `current_url`, `root_url` [Docs::URL]
Shortcuts for `context[:base_url]`, `context[:url]`, and `context[:root_url]` respectively.
* `root_path` [String]
Shortcut for `context[:root_path]`.
* `subpath` [String]
The sub-path from the base URL of the current URL.
_Example: if `base_url` equals `example.com/docs` and `current_url` equals `example.com/docs/file?raw`, the returned value is `/file`._
* `slug` [String]
The `subpath` removed of any leading slash or `.html` extension.
_Example: if `subpath` equals `/dir/file.html`, the returned value is `dir/file`._
* `root_page?` [Boolean]
Returns `true` if the current page is the root page.
* `initial_page?` [Boolean]
Returns `true` if the current page is the root page or its subpath is one of the scraper's `initial_paths`.
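As a small illustration of how these helpers combine, here is a hypothetical HTML filter that uses `at_css`, `slug` and `root_page?` together (the markup and behavior are invented):

```ruby
module Docs
  class MyScraper
    class TitleNoteFilter < Filter
      def call
        # Append the page's slug to the main heading, except on the root page
        if !root_page? && (node = at_css('h1'))
          node.content = "#{node.content} (#{slug})"
        end
        doc
      end
    end
  end
end
```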
## Core filters
* [`ContainerFilter`](https://github.com/Thibaut/devdocs/blob/master/lib/docs/filters/core/container.rb) — changes the root node of the document (remove everything outside)
* [`CleanHtmlFilter`](https://github.com/Thibaut/devdocs/blob/master/lib/docs/filters/core/clean_html.rb) — removes HTML comments, `<script>`, `<style>`, etc.
* [`NormalizeUrlsFilter`](https://github.com/Thibaut/devdocs/blob/master/lib/docs/filters/core/normalize_urls.rb) — replaces all URLs with their fully qualified counterpart
* [`InternalUrlsFilter`](https://github.com/Thibaut/devdocs/blob/master/lib/docs/filters/core/internal_urls.rb) — detects internal URLs (the ones to scrape) and replaces them with their unqualified, relative counterpart
* [`NormalizePathsFilter`](https://github.com/Thibaut/devdocs/blob/master/lib/docs/filters/core/normalize_paths.rb) — makes the internal paths consistent (e.g. always end with `.html`)
* [`CleanLocalUrlsFilter`](https://github.com/Thibaut/devdocs/blob/master/lib/docs/filters/core/clean_local_urls.rb) — removes links, iframes and images pointing to localhost (`FileScraper` only)
* [`InnerHtmlFilter`](https://github.com/Thibaut/devdocs/blob/master/lib/docs/filters/core/inner_html.rb) — converts the document to a string
* [`CleanTextFilter`](https://github.com/Thibaut/devdocs/blob/master/lib/docs/filters/core/clean_text.rb) — removes empty nodes
* [`AttributionFilter`](https://github.com/Thibaut/devdocs/blob/master/lib/docs/filters/core/attribution.rb) — appends the license info and link to the original document
* [`TitleFilter`](https://github.com/Thibaut/devdocs/blob/master/lib/docs/filters/core/title.rb) — prepends the document with a title (disabled by default)
* [`EntriesFilter`](https://github.com/Thibaut/devdocs/blob/master/lib/docs/filters/core/entries.rb) — abstract filter for extracting the page's metadata
## Custom filters
Scrapers can have any number of custom filters but require at least the two described below.
**Note:** filters are located in the [`lib/docs/filters`](https://github.com/Thibaut/devdocs/tree/master/lib/docs/filters/) directory. The class's name must be the [CamelCase](http://api.rubyonrails.org/classes/String.html#method-i-camelize) equivalent of the filename.
### `CleanHtmlFilter`
The `CleanHtml` filter is tasked with cleaning the HTML markup where necessary and removing anything superfluous or nonessential. Only the core documentation should remain at the end.
Nokogiri's many jQuery-like methods make it easy to search and modify elements — see the [API docs](http://www.rubydoc.info/github/sparklemotion/nokogiri/Nokogiri/XML/Node).
Here's an example implementation that covers the most common use-cases:
```ruby
module Docs
class MyScraper
class CleanHtmlFilter < Filter
def call
css('hr').remove
css('#changelog').remove if root_page?
# Set id attributes on <h3> instead of an empty <a>
css('h3').each do |node|
node['id'] = node.at_css('a')['id']
end
# Make proper table headers
css('td.header').each do |node|
node.name = 'th'
end
# Remove code highlighting
css('pre').each do |node|
node.content = node.content
end
doc
end
end
end
end
```
**Notes:**
* Empty elements will be automatically removed by the core `CleanTextFilter` later in the pipeline's execution.
* Although the goal is to end up with a clean version of the page, try to keep the number of modifications to a minimum, so as to make the code easier to maintain. Custom CSS is the preferred way of normalizing the pages (except for hiding stuff which should always be done by removing the markup).
* Try to document your filter's behavior as much as possible, particularly modifications that apply only to a subset of pages. It'll make updating the documentation easier.
### `EntriesFilter`
The `Entries` filter is responsible for extracting the page's metadata, represented by a set of _entries_, each with a name, type and path.
The following two models are used under the hood to represent the metadata:
* [`Entry(name, type, path)`](https://github.com/Thibaut/devdocs/blob/master/lib/docs/core/models/entry.rb)
* [`Type(name, slug, count)`](https://github.com/Thibaut/devdocs/blob/master/lib/docs/core/models/type.rb)
Each scraper must implement its own `EntriesFilter` by subclassing the [`Docs::EntriesFilter`](https://github.com/Thibaut/devdocs/blob/master/lib/docs/filters/core/entries.rb) class. The base class already implements the `call` method and includes four methods which the subclasses can override:
* `get_name` [String]
The name of the default entry (aka. the page's name).
It is usually guessed from the `slug` (documented above) or by searching the HTML markup.
**Default:** modified version of `slug` (underscores are replaced with spaces and forward slashes with dots)
* `get_type` [String]
The type of the default entry (aka. the page's type).
Entries without a type can be searched for but won't be listed in the app's sidebar (unless no other entries have a type).
**Default:** `nil`
* `include_default_entry?` [Boolean]
Whether to include the default entry.
Used when a page consists of multiple entries (returned by `additional_entries`) but doesn't have a name/type of its own, or to remove a page from the index (if it has no additional entries), in which case it won't be copied on the local filesystem and any link to it in the other pages will be broken (as explained on the [Scraper Reference](./scraper-reference.md) page, this is used to keep the `:skip` / `:skip_patterns` options to a maintainable size, or if the page includes links that can't be reached from anywhere else).
**Default:** `true`
* `additional_entries` [Array]
The list of additional entries.
Each entry is represented by an Array of three attributes: its name, fragment identifier, and type. The fragment identifier refers to the `id` attribute of the HTML element (usually a heading) that the entry relates to. It is combined with the page's path to become the entry's path. If absent or `nil`, the page's path is used. If the type is absent or `nil`, the default `type` is used.
Example: `[ ['One'], ['Two', 'id'], ['Three', nil, 'type'] ]` adds three additional entries, the first one named "One" with the default path and type, the second one named "Two" with the URL fragment "#id" and the default type, and the third one named "Three" with the default path and the type "type".
The list is usually constructed by running through the markup. Exceptions can also be hard-coded for specific pages.
**Default:** `[]`
The following accessors are also available, but must not be overridden:
* `name` [String]
Memoized version of `get_name` (`nil` for the root page).
* `type` [String]
Memoized version of `get_type` (`nil` for the root page).
**Notes:**
* Leading and trailing whitespace is automatically removed from names and types.
* Names must be unique across the documentation and as short as possible (ideally less than 30 characters). Whenever possible, methods should be differentiated from properties by appending `()`, and instance methods should be differentiated from class methods using the `Class#method` or `object.method` conventions.
* You can call `name` from `get_type` or `type` from `get_name` but doing both will cause a stack overflow (i.e. you can infer the name from the type or the type from the name, but you can't do both at the same time). Don't call `get_name` or `get_type` directly as their value isn't memoized.
* The root page has no name and no type (both are `nil`). `get_name` and `get_type` won't get called with the page (but `additional_entries` will).
* `Docs::EntriesFilter` is an _HTML filter_. It must be added to the scraper's `html_filters` stack.
* Try to document the code as much as possible, particularly the special cases. It'll make updating the documentation easier.
**Example:**
```ruby
module Docs
class MyScraper
class EntriesFilter < Docs::EntriesFilter
def get_name
node = at_css('h1')
result = node.content.strip
result << ' event' if type == 'Events'
result << '()' if node['class'].try(:include?, 'function')
result
end
def get_type
object, method = *slug.split('/')
method ? object : 'Miscellaneous'
end
def additional_entries
return [] if root_page?
css('h2').map do |node|
[node.content, node['id']]
end
end
def include_default_entry?
!at_css('.obsolete')
end
end
end
end
```

@ -0,0 +1,103 @@
# Maintainer's Guide
This document is intended for [DevDocs maintainers](#list-of-maintainers).
## Merging pull requests
- PRs should be approved by at least one maintainer before being merged.
- PRs that add or update documentations should always be merged locally, and the app deployed, before the merge is pushed to GitHub.
This workflow is required because there is a dependency between the local and production environments. The `thor docs:download` command downloads documentations from production files uploaded by the `thor docs:upload` command. If a PR adding a new documentation is merged and pushed to GitHub before the files have been uploaded to production, the `thor docs:download` command will fail for the new documentation and the Docker container will not build properly until the new documentation is deployed to production.
## Updating docs
The process for updating docs is as follows:
- Make version/release changes in the scraper file.
- If needed, update the copyright notice of the documentation in the scraper file (`options[:attribution]`) and the about page (`about_tmpl.coffee`). The copyright notice must be the same as the one on the original documentation.
- Run `thor docs:generate`.
- Make sure the documentation still works well. The `thor docs:generate` command outputs a summary of the changes, which helps identify issues (e.g. deleted files) and new pages to check out in the app. Verify locally that everything works, especially the files that were created (if any), and that the categorization of entries is still good. Often, updates will require code changes to tweak some new markup in the source website or categorize new entries.
- Commit the changes (protip: use the `thor docs:commit` command documented below).
- Optional: do more updates.
- Run `thor docs:upload` (documented below).
- [Deploy the app](#deploying-devdocs) and verify that everything works in production.
- Push to GitHub.
- Run `thor docs:clean` (documented below).
Note: changes to `public/docs/docs.json` should never be committed. This file reflects which documentations have been downloaded or generated locally, which is always none on a fresh `git clone`.
## Setup requirements
In order to deploy DevDocs, you must:
- be given access to Heroku, [configure the Heroku CLI](https://devcenter.heroku.com/articles/heroku-cli) on your computer, and familiarize yourself with Heroku's UI and CLI, as well as that of New Relic (accessible through [the Heroku dashboard](https://dashboard.heroku.com/apps/devdocs)).
- be given access to DevDocs's [Sentry instance](https://sentry.io/devdocs/devdocs-js/) (for JS error tracking) and familiarize yourself with its UI.
- be provided with DevDocs's S3 credentials, and install (`brew install awscli` on macOS) and [configure](https://docs.aws.amazon.com/cli/latest/reference/configure/) the AWS CLI on your computer. The configuration must add a named profile called "devdocs":
```
aws configure --profile devdocs
```
- be provided with DevDocs's MaxCDN push zone credentials, and add them to your `.bash_profile` as such:
```
export DEVDOCS_DL_USERNAME="username"
export DEVDOCS_DL_PASSWORD="password"
```
## Thor commands
In addition to the [publicly-documented commands](https://github.com/freeCodeCamp/devdocs#available-commands), the following commands are aimed at DevDocs maintainers:
- `thor docs:package`
Generates packages for one or more documentations. Those packages are intended to be uploaded to DevDocs's MaxCDN push zone by maintainers via the `thor docs:upload` command, and downloaded by users via the `thor docs:download` command.
Versions can be specified as such: `thor docs:package rails@5.2 node@10\ LTS`.
Packages can also be automatically generated during the scraping process by passing the `--package` option to `thor docs:generate`.
- `thor docs:upload`
This command does two operations:
1. sync the files for the specified documentations with S3 (used by the Heroku app);
2. upload the documentations' packages to DevDocs's MaxCDN push zone (used by the `thor docs:download` command).
For the command to work, you must have the AWS CLI and MaxCDN credentials configured as indicated above.
**Important:** the app should always be deployed immediately after this command has finished running. Do not run this command unless you are able and ready to deploy DevDocs.
To upload all documentations that are packaged on your computer, run `thor docs:upload --packaged`.
To test your configuration and the effect of this command without uploading anything, pass the `--dryrun` option.
- `thor docs:commit`
Shortcut command to create a Git commit for a given documentation once it has been updated. Scraper and `assets/` file changes will be committed. The commit message will include the most recent version that the documentation was updated to. If some files were missed by the commit, use `git commit --amend` to add them to the commit. The command may be run before `thor docs:upload` is run, but the commit should not be pushed to GitHub before the files have been uploaded and the app deployed.
- `thor docs:clean`
Shortcut command to delete all package files (once uploaded via `thor docs:upload`, they are not needed anymore).
## Deploying DevDocs
Once docs have been uploaded via `thor docs:upload` (if applicable), deploying DevDocs is as simple as running `git push heroku master`. See [Heroku's documentation](https://devcenter.heroku.com/articles/git) for more information.
- If you're deploying documentation updates, verify that the documentations work properly once the deploy is done (you will need to reload [devdocs.io](https://devdocs.io/) a couple times for the application cache to update and the new version to load).
- If you're deploying frontend changes, monitor [Sentry](https://sentry.io/devdocs/devdocs-js/) for new JS errors once the deploy is done.
- If you're deploying server changes, monitor New Relic (accessible through [the Heroku dashboard](https://dashboard.heroku.com/apps/devdocs)) for Ruby exceptions and throughput or response time changes once the deploy is done.
If any issue arises, run `heroku rollback` to roll back to the previous version of the app (this can also be done via Heroku's UI). Note that this will not revert changes made to documentation files that were uploaded via `thor docs:upload`. Try to fix the issue as quickly as possible, then re-deploy the app. Reach out to other maintainers if you need help.
If this is your first deploy, make sure another maintainer is around to assist.
## List of maintainers
- [Jed Fox](https://github.com/j-f1)
- [Jasper van Merle](https://github.com/jmerle)
- [Ahmad Abdolsaheb](https://github.com/ahmadabdolsaheb)
- [Mrugesh Mohapatra](https://github.com/raisedadead)
- [Thibaut Courouble](https://github.com/thibaut)
Interested in helping maintain DevDocs? Come talk to us on [Gitter](https://gitter.im/FreeCodeCamp/DevDocs) :)

@ -0,0 +1,186 @@
**Table of contents:**
* [Overview](#overview)
* [Configuration](#configuration)
- [Attributes](#attributes)
- [Filter stacks](#filter-stacks)
- [Filter options](#filter-options)
## Overview
Starting from a root URL, scrapers recursively follow links that match a set of rules, passing each valid response through a chain of filters before writing the file on the local filesystem. They also create an index of the pages' metadata (determined by one filter), which is dumped into a JSON file at the end.
Scrapers rely on the following libraries:
* [Typhoeus](https://github.com/typhoeus/typhoeus) for making HTTP requests
* [HTML::Pipeline](https://github.com/jch/html-pipeline) for applying filters
* [Nokogiri](http://nokogiri.org/) for parsing HTML
There are currently two kinds of scrapers: [`UrlScraper`](https://github.com/Thibaut/devdocs/blob/master/lib/docs/core/scrapers/url_scraper.rb) which downloads files via HTTP and [`FileScraper`](https://github.com/Thibaut/devdocs/blob/master/lib/docs/core/scrapers/file_scraper.rb) which reads them from the local filesystem. They function almost identically (both use URLs), except that `FileScraper` substitutes the base URL with a local path before reading a file. `FileScraper` uses the placeholder `localhost` base URL by default and includes a filter to remove any URL pointing to it at the end.
To be processed, a response must meet the following requirements:
* 200 status code
* HTML content type
* effective URL (after redirection) contained in the base URL (explained below)
(`FileScraper` only checks if the file exists and is not empty.)
Each URL is requested only once (case-insensitive).
## Configuration
Configuration is done via class attributes and divided into three main categories:
* [Attributes](#attributes) — essential information such as name, version, URL, etc.
* [Filter stacks](#filter-stacks) — the list of filters that will be applied to each page.
* [Filter options](#filter-options) — the options passed to said filters.
**Note:** scrapers are located in the [`lib/docs/scrapers`](https://github.com/Thibaut/devdocs/tree/master/lib/docs/scrapers/) directory. The class's name must be the [CamelCase](http://api.rubyonrails.org/classes/String.html#method-i-camelize) equivalent of the filename.
### Attributes
* `name` [String]
Must be unique.
Defaults to the class's name.
* `slug` [String]
Must be unique, lowercase, and not include dashes (underscores are ok).
Defaults to `name` lowercased.
* `type` [String] **(required, inherited)**
Defines the CSS class name (`_[type]`) and custom JavaScript class (`app.views.[Type]Page`) that will be added/loaded on each page. Documentations sharing a similar structure (e.g. generated with the same tool or originating from the same website) should use the same `type` to avoid duplicating the CSS and JS.
Must include lowercase letters only.
* `release` [String] **(required)**
The version of the software at the time the scraper was last run. This is only informational and doesn't affect the scraper's behavior.
* `base_url` [String] **(required in `UrlScraper`)**
The documents' location. Only URLs _inside_ the `base_url` will be scraped. "inside" more or less means "starting with" except that `/docs` is outside `/doc` (but `/doc/` is inside).
Defaults to `localhost` in `FileScraper`. _(Note: any iframe, image, or skipped link pointing to localhost will be removed by the `CleanLocalUrls` filter; the value should be overridden if the documents are available online.)_
Unless `root_path` is set, the root/initial URL is equal to `base_url`.
* `root_path` [String] **(inherited)**
The path from the `base_url` of the root URL.
* `initial_paths` [Array] **(inherited)**
A list of paths (from the `base_url`) to add to the initial queue. Useful for scraping isolated documents.
Defaults to `[]`. _(Note: the `root_path` is added to the array at runtime.)_
* `dir` [String] **(required, `FileScraper` only)**
The absolute path where the files are located on the local filesystem.
_Note: `FileScraper` works exactly like `UrlScraper` (manipulating the same kind of URLs) except that it substitutes `base_url` with `dir` in order to read files instead of making HTTP requests._
* `params` [Hash] **(inherited, `UrlScraper` only)**
Query string parameters to append to every URL. (e.g. `{ format: 'raw' }` → `?format=raw`)
Defaults to `{}`.
* `abstract` [Boolean]
Make the scraper abstract / not runnable. Used for sharing behavior with other scraper classes (e.g. all MDN scrapers inherit from the abstract [`Mdn`](https://github.com/Thibaut/devdocs/blob/master/lib/docs/scrapers/mdn/mdn.rb) class).
Defaults to `false`.
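As an illustration, here is a minimal sketch of what a hypothetical `UrlScraper` subclass could look like with these attributes set (the class name, release and URLs are invented for the example):
```ruby
# lib/docs/scrapers/my_doc.rb (the class name is the CamelCase equivalent of the filename)
module Docs
  class MyDoc < UrlScraper
    self.name = 'MyDoc'            # defaults to the class's name
    self.slug = 'mydoc'            # defaults to the lowercased name
    self.type = 'simple'           # shared CSS/JS page type
    self.release = '1.0.0'         # informational only
    self.base_url = 'https://example.com/docs/'
    self.root_path = 'index.html'  # the root URL is base_url + root_path
  end
end
```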
### Filter stacks
Each scraper has two [filter](https://github.com/Thibaut/devdocs/blob/master/lib/docs/core/filter.rb) [stacks](https://github.com/Thibaut/devdocs/blob/master/lib/docs/core/filter_stack.rb): `html_filters` and `text_filters`. They are combined into a pipeline (using the [HTML::Pipeline](https://github.com/jch/html-pipeline) library) which causes each filter to hand its output to the next filter's input.
HTML filters are executed first and manipulate a parsed version of the document (a [Nokogiri](http://nokogiri.org/Nokogiri/XML/Node.html) node object), whereas text filters manipulate the document as a string. This separation avoids parsing the document multiple times.
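Concretely, an HTML filter is a small class whose `call` method manipulates the parsed document and returns it. A bare-bones, hypothetical example (the selectors are made up):
```ruby
module Docs
  class MyDoc
    class CleanHtmlFilter < Filter
      def call
        # remove navigation and script nodes we don't want to keep
        css('.sidebar', 'script', 'style').remove
        doc  # filters must return the document
      end
    end
  end
end
```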
Filter stacks are like sorted sets. They can be modified using the following methods:
```ruby
push(*names) # append one or more filters at the end
insert_before(index, *names) # insert one filter before another (index can be a name)
insert_after(index, *names) # insert one filter after another (index can be a name)
replace(index, name) # replace one filter with another (index can be a name)
```
"names" are `require` paths relative to `Docs` (e.g. `jquery/clean_html``Docs::Jquery::CleanHtml`).
Default `html_filters`:
* [`ContainerFilter`](https://github.com/Thibaut/devdocs/blob/master/lib/docs/filters/core/container.rb) — changes the root node of the document (removes everything outside)
* [`CleanHtmlFilter`](https://github.com/Thibaut/devdocs/blob/master/lib/docs/filters/core/clean_html.rb) — removes HTML comments, `<script>`, `<style>`, etc.
* [`NormalizeUrlsFilter`](https://github.com/Thibaut/devdocs/blob/master/lib/docs/filters/core/normalize_urls.rb) — replaces all URLs with their fully qualified counterpart
* [`InternalUrlsFilter`](https://github.com/Thibaut/devdocs/blob/master/lib/docs/filters/core/internal_urls.rb) — detects internal URLs (the ones to scrape) and replaces them with their unqualified, relative counterpart
* [`NormalizePathsFilter`](https://github.com/Thibaut/devdocs/blob/master/lib/docs/filters/core/normalize_paths.rb) — makes the internal paths consistent (e.g. always end with `.html`)
* [`CleanLocalUrlsFilter`](https://github.com/Thibaut/devdocs/blob/master/lib/docs/filters/core/clean_local_urls.rb) — removes links, iframes and images pointing to localhost (`FileScraper` only)
Default `text_filters`:
* [`InnerHtmlFilter`](https://github.com/Thibaut/devdocs/blob/master/lib/docs/filters/core/inner_html.rb) — converts the document to a string
* [`CleanTextFilter`](https://github.com/Thibaut/devdocs/blob/master/lib/docs/filters/core/clean_text.rb) — removes empty nodes
* [`AttributionFilter`](https://github.com/Thibaut/devdocs/blob/master/lib/docs/filters/core/attribution.rb) — appends the license info and link to the original document
Additionally:
* [`TitleFilter`](https://github.com/Thibaut/devdocs/blob/master/lib/docs/filters/core/title.rb) is a core HTML filter, disabled by default, which prepends the document with a title (`<h1>`).
* [`EntriesFilter`](https://github.com/Thibaut/devdocs/blob/master/lib/docs/filters/core/entries.rb) is an abstract HTML filter that each scraper must implement; it is responsible for extracting the page's metadata.
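The `EntriesFilter` subclasses that appear in the diff further down give the general shape; a minimal hypothetical implementation might look like this (the selectors and type are illustrative only):
```ruby
module Docs
  class MyDoc
    class EntriesFilter < Docs::EntriesFilter
      def get_name
        at_css('h1').content.strip        # name of the current page
      end

      def get_type
        slug.split('/').first.capitalize  # category the page is grouped under
      end

      def additional_entries
        # extra index entries as [name, fragment id] pairs
        css('h2[id]').map { |node| [node.content, node['id']] }
      end
    end
  end
end
```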
### Filter options
The filter options are stored in the `options` Hash. The Hash is inheritable (a recursive copy) and empty by default.
More information about how filters work is available on the [Filter Reference](./filter-reference.md) page.
* [`ContainerFilter`](https://github.com/Thibaut/devdocs/blob/master/lib/docs/filters/core/container.rb)
- `:container` [String or Proc]
A CSS selector of the container element. Everything outside of it will be removed and become unavailable to the other filters. If more than one element matches the selector, the first one inside the DOM is used. If no elements match the selector, an error is raised.
If the value is a Proc, it is called for each page with the filter instance as argument, and should return a selector or `nil`.
The default container is the `<body>` element.
_Note: links outside of the container element will not be followed by the scraper. To remove links that should be followed, use a [`CleanHtml`](./filter-reference.md#cleanhtmlfilter) filter later in the stack._
* [`NormalizeUrlsFilter`](https://github.com/Thibaut/devdocs/blob/master/lib/docs/filters/core/normalize_urls.rb)
The following options are used to modify URLs in the pages. They are useful to remove duplicates (when the same page is accessible from multiple URLs) and fix websites that have a bunch of redirections in place (when URLs that should be scraped, aren't, because they are behind a redirection which is outside of the `base_url` — see the MDN scrapers for examples of this).
- `:replace_urls` [Hash]
Replaces all instances of a URL with another.
Format: `{ 'original_url' => 'new_url' }`
- `:replace_paths` [Hash]
Replaces all instances of a sub-path (path from the `base_url`) with another.
Format: `{ 'original_path' => 'new_path' }`
- `:fix_urls` [Proc]
Called with each URL. If the returned value is `nil`, the URL isn't modified. Otherwise the returned value is used as replacement.
_Note: before these rules are applied, all URLs are converted to their fully qualified counterpart (http://...)._
* [`InternalUrlsFilter`](https://github.com/Thibaut/devdocs/blob/master/lib/docs/filters/core/internal_urls.rb)
Internal URLs are the ones _inside_ the scraper's `base_url` ("inside" more or less means "starting with", except that `/docs` is outside `/doc`). They will be scraped unless excluded by one of the following rules. All internal URLs are converted to relative URLs inside the pages.
- `:skip_links` [Boolean or Proc]
If `false`, does not convert or follow any internal URL (creating a single-page documentation).
If the value is a Proc, it is called for each page with the filter instance as argument.
- `:follow_links` [Proc]
Called for each page with the filter instance as argument. If the returned value is `false`, does not add internal URLs to the queue.
- `:trailing_slash` [Boolean]
If `true`, adds a trailing slash to all internal URLs. If `false`, removes it.
This is another option used to remove duplicate pages.
- `:skip` [Array]
Ignores internal URLs whose sub-paths (path from the `base_url`) are in the Array (case-insensitive).
- `:skip_patterns` [Array]
Ignores internal URLs whose sub-paths match any Regexp in the Array.
- `:only` [Array]
Ignores internal URLs whose sub-paths aren't in the Array (case-insensitive) and don't match any Regexp in `:only_patterns`.
- `:only_patterns` [Array]
Ignores internal URLs whose sub-paths don't match any Regexp in the Array and aren't in `:only`.
If the scraper has a `root_path`, the empty and `/` paths are automatically skipped.
If `:only` or `:only_patterns` is set, the root path is automatically added to `:only`.
_Note: pages can be excluded from the index based on their content using the [`Entries`](./filter-reference.md#entriesfilter) filter. However, their URLs will still be converted to relative in the other pages and trying to open them will return a 404 error. Although not ideal, this is often better than having to maintain a long list of `:skip` URLs._
* [`AttributionFilter`](https://github.com/Thibaut/devdocs/blob/master/lib/docs/filters/core/attribution.rb)
- `:attribution` [String] **(required)**
An HTML string with the copyright and license information. See the other scrapers for examples.
* [`TitleFilter`](https://github.com/Thibaut/devdocs/blob/master/lib/docs/filters/core/title.rb)
- `:title` [String or Boolean or Proc]
Unless the value is `false`, adds a title to every page.
If the value is `nil`, the title is the name of the page as determined by the [`Entries`](./filter-reference.md#entriesfilter) filter. Otherwise the title is the String or the value returned by the Proc (called for each page, with the filter instance as argument). If the Proc returns `nil` or `false`, no title is added.
- `:root_title` [String or Boolean]
Overrides the `:title` option for the root page only.
_Note: this filter is disabled by default._
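Putting the options above together, a hypothetical scraper's configuration could look like the sketch below (all selectors, paths and URLs are invented for the example; real combinations appear in the scraper diffs later in this document, e.g. the Ansible and Homebrew scrapers):
```ruby
options[:container] = '.main-content'
# or pick the container per page:
# options[:container] = ->(filter) { filter.root_page? ? '#home' : '#content' }

options[:replace_paths] = { 'index.htm' => 'index.html' }
options[:fix_urls] = ->(url) do
  url.sub('http://', 'https://') if url.start_with?('http://')  # nil leaves the URL unchanged
end

options[:skip] = %w(changelog.html news.html)
options[:skip_patterns] = [/\Ainternals\//]
options[:only_patterns] = [/\Aapi\//, /\Aguide\//]

options[:attribution] = <<-HTML
  &copy; Example contributors<br>
  Licensed under the MIT License.
HTML

# requires the 'title' filter to be pushed onto html_filters
options[:title] = false
options[:root_title] = 'MyDoc Documentation'
```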

@ -12,7 +12,7 @@ class App < Sinatra::Application
Rack::Mime::MIME_TYPES['.webapp'] = 'application/x-web-app-manifest+json'
configure do
use Rack::SslEnforcer, only_environments: ['production', 'test'], hsts: false, force_secure_cookies: false
use Rack::SslEnforcer, only_environments: ['production', 'test'], hsts: true, force_secure_cookies: false
set :sentry_dsn, ENV['SENTRY_DSN']
set :protection, except: [:frame_options, :xss_header]
@ -74,7 +74,7 @@ class App < Sinatra::Application
set :static, false
set :cdn_origin, 'https://cdn.devdocs.io'
set :docs_origin, '//docs.devdocs.io'
set :csp, "default-src 'self' *; script-src 'self' 'nonce-devdocs' http://cdn.devdocs.io https://cdn.devdocs.io https://www.google-analytics.com https://secure.gaug.es http://*.jquery.com https://*.jquery.com; font-src 'none'; style-src 'self' 'unsafe-inline' *; img-src 'self' * data:;"
set :csp, "default-src 'self' *; script-src 'self' 'nonce-devdocs' https://cdn.devdocs.io https://www.google-analytics.com https://secure.gaug.es https://*.jquery.com; font-src 'none'; style-src 'self' 'unsafe-inline' *; img-src 'self' * data:;"
use Rack::ConditionalGet
use Rack::ETag

@ -1,5 +1,5 @@
require 'bundler/setup'
Bundler.require :docs
Bundler.require :default, :docs
require 'active_support'
require 'active_support/core_ext'
@ -29,6 +29,7 @@ module Docs
self.rescue_errors = false
class DocNotFound < NameError; end
class SetupError < StandardError; end
def self.all
Dir["#{root_path}/docs/scrapers/**/*.rb"].
@ -74,6 +75,22 @@ module Docs
end
end
def self.find_by_slug(slug, version = nil)
doc = all.find { |klass| klass.slug == slug }
unless doc
raise DocNotFound.new(%(could not find doc with "#{slug}"), slug)
end
if version.present?
version = doc.versions.find { |klass| klass.version == version || klass.version_slug == version }
raise DocNotFound.new(%(could not find version "#{version}" for doc "#{doc.name}"), doc.name) unless version
doc = version
end
doc
end
def self.generate_page(name, version, page_id)
find(name, version).store_page(store, page_id)
end

@ -95,6 +95,9 @@ module Docs
false
end
end
rescue Docs::SetupError => error
puts "ERROR: #{error.message}"
false
end
def store_pages(store)
@ -118,6 +121,9 @@ module Docs
false
end
end
rescue Docs::SetupError => error
puts "ERROR: #{error.message}"
false
end
private

@ -1,5 +1,7 @@
module Docs
class Requester < Typhoeus::Hydra
include Instrumentable
attr_reader :request_options
def self.run(urls, options = {}, &block)
@ -20,7 +22,7 @@ module Docs
def initialize(options = {})
@request_options = options.extract!(:request_options)[:request_options].try(:dup) || {}
options[:max_concurrency] ||= 20
options[:pipelining] = false
options[:pipelining] = 0
super
end
@ -52,9 +54,11 @@ module Docs
end
def handle_response(response)
on_response.each do |callback|
result = callback.call(response)
result.each { |url| request(url) } if result.is_a? Array
instrument 'handle_response.requester', url: response.url do
on_response.each do |callback|
result = callback.call(response)
result.each { |url| request(url) } if result.is_a?(Array)
end
end
end
end

@ -1,14 +1,13 @@
module Docs
class FileScraper < Scraper
SOURCE_DIRECTORY = File.expand_path '../../../../../docs', __FILE__
Response = Struct.new :body, :url
class << self
attr_accessor :dir
def inherited(subclass)
super
subclass.base_url = base_url
subclass.dir = dir
end
end
@ -16,13 +15,25 @@ module Docs
html_filters.push 'clean_local_urls'
def source_directory
@source_directory ||= File.join(SOURCE_DIRECTORY, self.class.path)
end
private
def assert_source_directory_exists
unless Dir.exists?(source_directory)
raise SetupError, "The #{self.class.name} scraper requires the original documentation files to be stored in the \"#{source_directory}\" directory."
end
end
def request_one(url)
Response.new read_file(file_path_for(url)), URL.parse(url)
assert_source_directory_exists
Response.new read_file(url_to_path(url)), URL.parse(url)
end
def request_all(urls)
assert_source_directory_exists
queue = [urls].flatten
until queue.empty?
result = yield request_one(queue.shift)
@ -34,12 +45,12 @@ module Docs
response.body.present?
end
def file_path_for(url)
File.join self.class.dir, url.remove(base_url.to_s)
def url_to_path(url)
url.remove(base_url.to_s)
end
def read_file(path)
File.read(path)
File.read(File.join(source_directory, path))
rescue
instrument 'warn.doc', msg: "Failed to open file: #{path}"
nil

@ -72,7 +72,7 @@ module Docs
if base == dest
''
elsif dest.start_with? File.join(base, '')
elsif dest.start_with?(::File.join(base, ''))
url.path[(path.length)..-1]
end
end

@ -20,8 +20,8 @@ module Docs
def additional_entries
return [] unless subpath.start_with?('helpers') && subpath != 'helpers/'
css('h2').each_with_object [] do |node, entries|
next if node['id'] == 'access-from-helpers'
css('h2, h3').each_with_object [] do |node, entries|
next if node['id'] == 'access-from-helpers' || node.content !~ /\s*[a-z_]/
entries << ["#{node.content} (#{name})", node['id']]
end
end

@ -2,7 +2,7 @@ module Docs
class Cordova
class CleanHtmlFilter < Filter
def call
@doc = at_css('.page-content > div')
@doc = at_css('#page-toc-source') || at_css('.page-content > div')
at_css('h1').content = 'Apache Cordova' if root_page?

@ -31,6 +31,8 @@ module Docs
'Keywords'
elsif subpath.start_with?('experimental')
'Experimental libraries'
elsif subpath.start_with?('language/')
'Language'
elsif type = at_css('.t-navbar > div:nth-child(4) > :first-child').try(:content)
type.strip!
type.remove! ' library'

@ -6,13 +6,16 @@ module Docs
'CSS_Background_and_Borders' => 'Backgrounds & Borders',
'CSS_Columns' => 'Multi-column Layout',
'CSS_Flexible_Box_Layout' => 'Flexible Box Layout',
'CSS_Fonts' => 'Fonts',
'CSS_Grid_Layout' => 'Grid Layout',
'CSS_Images' => 'Images',
'CSS_Lists_and_Counters' => 'Lists',
'CSS_Transforms' => 'Transforms',
'Media_Queries' => 'Media Queries',
'filter-function' => 'Filter Effects',
'transform-function' => 'Transforms',
'@media' => 'Media Queries',
'overscroll' => 'Overscroll',
'text-size-adjust' => 'Miscellaneous',
'resolved_value' => 'Miscellaneous',
'touch-action' => 'Miscellaneous',
@ -42,7 +45,7 @@ module Docs
end
def get_type
if slug.include?('-webkit') || slug.include?('-moz')
if slug.include?('-webkit') || slug.include?('-moz') || slug.include?('-ms')
'Extensions'
elsif type = TYPE_BY_PATH[slug.split('/').first]
type
@ -66,19 +69,22 @@ module Docs
'Pseudo-Elements'
elsif name.start_with?(':')
'Selectors'
elsif name.start_with?('display-')
'Display'
else
'Miscellaneous'
end
end
STATUSES = {
'spec-Living' => 0,
'spec-REC' => 1,
'spec-CR' => 2,
'spec-PR' => 3,
'spec-LC' => 4,
'spec-WD' => 5,
'spec-ED' => 6
'spec-Living' => 0,
'spec-REC' => 1,
'spec-CR' => 2,
'spec-PR' => 3,
'spec-LC' => 4,
'spec-WD' => 5,
'spec-ED' => 6,
'spec-Obsolete' => 7
}
PRIORITY_STATUSES = %w(spec-REC spec-CR)
@ -86,6 +92,7 @@ module Docs
def get_spec
return unless table = at_css('#Specifications + table') || css('.standard-table').last
specs = table.css('tbody tr').to_a
# [link, span]
specs.map! { |node| [node.at_css('> td:nth-child(1) > a'), node.at_css('> td:nth-child(2) > span')] }
@ -110,8 +117,8 @@ module Docs
'shape' => [
%w(rect() Syntax) ],
'timing-function' => [
%w(cubic-bezier() The_cubic-bezier()_class_of_timing-functions),
%w(steps() The_steps()_class_of_timing-functions),
%w(cubic-bezier()),
%w(steps()),
%w(linear linear),
%w(ease ease),
%w(ease-in ease-in),

@ -30,7 +30,7 @@ module Docs
end
# Remove <div> wrapping .overheadIndicator
css('div > .overheadIndicator:first-child:last-child').each do |node|
css('div > .overheadIndicator:first-child:last-child', 'div > .blockIndicator:first-child:last-child').each do |node|
node.parent.replace(node)
end

@ -6,6 +6,7 @@ module Docs
'EXT_' => 'WebGL',
'OES_' => 'WebGL',
'WEBGL_' => 'WebGL',
'Sensor API' => 'Sensors',
'Ambient Light' => 'Ambient Light',
'Audio' => 'Audio',
'Battery Status' => 'Battery Status',
@ -22,10 +23,12 @@ module Docs
'Encrypted Media Extensions' => 'Encrypted Media',
'Fetch' => 'Fetch',
'File API' => 'File',
'Fullscreen' => 'Fullscreen',
'Geolocation' => 'Geolocation',
'Geometry' => 'Geometry',
'High Resolution Time' => 'Performance',
'Intersection' => 'Intersection Observer',
'Keyboard' => 'Keyboard',
'Media Capabilities' => 'Media',
'Media Capture' => 'Media',
'Media Session' => 'Media',
@ -35,6 +38,7 @@ module Docs
'MIDI' => 'Audio',
'Navigation Timing' => 'Performance',
'Network Information' => 'Network Information',
'Orientation Sensor' => 'Sensors',
'Payment' => 'Payments',
'Performance Timeline' => 'Performance',
'Pointer Events' => 'Pointer Events',
@ -53,11 +57,13 @@ module Docs
'Web App Manifest' => 'Web App Manifest',
'Budget' => 'Budget',
'Web Authentication' => 'Authentication',
'Web Locks' => 'Locks',
'Web Workers' => 'Web Workers',
'WebGL' => 'WebGL',
'WebRTC' => 'WebRTC',
'WebUSB' => 'WebUSB',
'WebVR' => 'WebVR' }
'WebVR' => 'WebVR',
'WebVTT' => 'WebVTT' }
TYPE_BY_NAME_STARTS_WITH = {
'AbortController' => 'Fetch',
@ -89,10 +95,12 @@ module Docs
'Fetch' => 'Fetch',
'File' => 'File',
'GlobalEventHandlers' => 'GlobalEventHandlers',
'HMDVR' => 'WebVR',
'history' => 'History',
'HTML Drag' => 'Drag & Drop',
'HTML' => 'Elements',
'IDB' => 'IndexedDB',
'Keyboard' => 'Keyboard',
'location' => 'Location',
'navigator' => 'Navigator',
'MediaKeySession' => 'Encrypted Media',
@ -122,6 +130,7 @@ module Docs
'StyleSheet' => 'CSS',
'Stylesheet' => 'CSS',
'SVG' => 'SVG',
'TextTrack' => 'WebVTT',
'TimeRanges' => 'Media',
'timing' => 'Performance',
'Timing' => 'Performance',
@ -160,6 +169,7 @@ module Docs
'timing' => 'Performance',
'Timing' => 'Performance',
'udio' => 'Audio',
'VRDevice' => 'WebVR',
'WebGL' => 'WebGL',
'WEBGL' => 'WebGL',
'WebRTC' => 'WebRTC',
@ -239,7 +249,7 @@ module Docs
def include_default_entry?
return true if type == 'Console'
return true unless node = doc.at_css('.overheadIndicator')
return true unless node = doc.at_css('.overheadIndicator, .blockIndicator')
node = node.parent while node.parent != doc
return true if node.previous_element.try(:name).in?(%w(h2 h3))
content = node.content

@ -65,6 +65,10 @@ module Docs
end
end
css('pre code').each do |node|
node.before(node.children).remove
end
doc
end
end

@ -52,7 +52,7 @@ module Docs
return [] if subpath.start_with?('users_guide')
return [] if IGNORE_ENTRIES_PATHS.include?(subpath.split('/').last)
css('#synopsis > ul > li').each_with_object [] do |node, entries|
css('#synopsis > details > ul > li').each_with_object [] do |node, entries|
link = node.at_css('a')
name = node.content.strip
name.remove! %r{\A(?:module|data|newtype|class|type family m|type)\s+}
@ -75,7 +75,7 @@ module Docs
end
def include_default_entry?
subpath.start_with?('users_guide') || at_css('#synopsis > ul > li')
subpath.start_with?('users_guide') || at_css('#synopsis > details > ul > li')
end
end
end

@ -28,7 +28,7 @@ module Docs
def include_default_entry?
return false if %w(Element/Heading_Elements).include?(slug)
(node = doc.at_css '.overheadIndicator').nil? || node.content.exclude?('not on a standards track')
(node = doc.at_css '.overheadIndicator, .blockIndicator').nil? || node.content.exclude?('not on a standards track')
end
def additional_entries

@ -11,12 +11,12 @@ module Docs
def other
# Remove "style" attribute
css('.inheritsbox', '.overheadIndicator').each do |node|
css('.inheritsbox', '.overheadIndicator', '.blockIndicator').each do |node|
node.remove_attribute 'style'
end
# Remove <div> wrapping .overheadIndicator
css('div > .overheadIndicator:first-child:last-child').each do |node|
css('div > .overheadIndicator:first-child:last-child', 'div > .blockIndicator:first-child:last-child').each do |node|
node.parent.replace(node)
end
end

@ -88,7 +88,7 @@ module Docs
end
def include_default_entry?
node = doc.at_css '.overheadIndicator, .warning'
node = doc.at_css '.blockIndicator, .warning'
# Can't use :first-child because #doc is a DocumentFragment
return true unless node && node.parent == doc && !node.previous_element

@ -14,6 +14,8 @@ module Docs
'#Quick_Links',
'hr']
BROWSER_UNNECESSARY_CLASS_REGEX = /\s*bc-browser[\w_-]+/
def call
css(*REMOVE_NODES).remove
@ -61,6 +63,94 @@ module Docs
node.previous_element << node
end
# New compatibility tables
css('.bc-data #Legend + dl', '.bc-data #Legend', '.bc-data #Legend_2 + dl', '.bc-data #Legend_2', '.bc-browser-name').remove
css('abbr.only-icon[title="Full support"]',
'abbr.only-icon[title="Partial support"]',
'abbr.only-icon[title="No support"]',
'abbr.only-icon[title="See implementation notes"]').remove
css('.bc-data .ic-altname', '.bc-data .ic-deprecated', '.bc-data .ic-non-standard', '.bc-data .ic-experimental').each do |node|
node.parent.remove
end
css('abbr.only-icon').each do |node|
node.replace(node.content)
end
css('.bc-table .bc-platforms td', '.bc-table .bc-browsers td').each do |node|
node.name = 'th'
end
css('.bc-data').each do |node|
link = node.at_css('.bc-github-link')
prev = node.previous_element
prev = prev.previous_element until prev.name == 'h2'
prev.add_child(link)
node.before(node.children).remove
end
css('.bc-table').each do |node|
desktop_table = node
mobile_table = node.dup
desktop_table.after(mobile_table)
if desktop_table.at_css('.bc-platform-server')
server_table = node.dup
mobile_table.after(server_table)
end
desktop_columns = desktop_table.at_css('th.bc-platform-desktop')['colspan'].to_i
mobile_columns = desktop_table.at_css('th.bc-platform-mobile')['colspan'].to_i
desktop_table.css('.bc-platform-mobile').remove
desktop_table.css('.bc-platform-server').remove
desktop_table.css('.bc-browsers th').to_a[(desktop_columns + 1)..-1].each(&:remove)
desktop_table.css('tr:not(.bc-platforms):not(.bc-browsers)').each do |line|
line.css('td').to_a[(desktop_columns)..-1].each(&:remove)
end
mobile_table.css('.bc-platform-desktop').remove
mobile_table.css('.bc-platform-server').remove
mobile_table.css('.bc-browsers th').to_a[1..(desktop_columns)].each(&:remove)
mobile_table.css('.bc-browsers th').to_a[(mobile_columns + 1)..-1].each(&:remove)
mobile_table.css('tr:not(.bc-platforms):not(.bc-browsers)').each do |line|
line.css('td').to_a[0..(desktop_columns - 1)].each(&:remove)
line.css('td').to_a[(mobile_columns)..-1].each(&:remove)
end
if server_table
server_table.css('.bc-platform-desktop').remove
server_table.css('.bc-platform-mobile').remove
server_table.css('.bc-browsers th').to_a[1..(desktop_columns + mobile_columns)].each(&:remove)
server_table.css('tr:not(.bc-platforms):not(.bc-browsers)').each do |line|
line.css('td').to_a[0..(desktop_columns + mobile_columns - 1)].each(&:remove)
end
end
end
# Reduce page size to make the offline bundle smaller.
css('.bc-supports-unknown').remove_attr('class')
css('td[class*="bc-platform"], th[class*="bc-platform"]').remove_attr('class')
css('td[class*="bc-browser"], th[class*="bc-browser"]').each do |node|
class_name = node['class']
class_name.remove!(BROWSER_UNNECESSARY_CLASS_REGEX)
if class_name.present?
node['class'] = class_name
else
node.remove_attribute('class')
end
end
css('abbr[title*="Compatibility unknown"]').each do |node|
node.before(node.children).remove
end
doc
end
end

@ -16,7 +16,7 @@ module Docs
css('pre').each do |node|
if lang = node.at_css('code')['class']
node['data-language'] = lang.remove('lang-')
node['data-language'] = lang.remove(%r{lang(uage)?-})
end
node.content = node.content

@ -26,6 +26,7 @@ module Docs
def get_type
type = at_css('h1').content.strip
type.remove! %r{\[.*\]}
REPLACE_TYPES[type] || "#{type.first.upcase}#{type[1..-1]}"
end
@ -40,6 +41,7 @@ module Docs
klass = nil if node.name == 'h2'
name = node.content.strip
name.remove! %r{\s*\[src\]}
# Skip constructors
if name.start_with? 'new '

@ -37,8 +37,10 @@ module Docs
end
# Add class to differentiate Ruby code from C code
css('.method-source-code > pre').each do |node|
node['class'] = node.at_css('.ruby-keyword') ? 'ruby' : 'c'
css('.method-source-code').each do |node|
node.parent.prepend_child(node)
pre = node.at_css('pre')
pre['class'] = pre.at_css('.ruby-keyword') ? 'ruby' : 'c'
end
# Remove code highlighting

@ -6,9 +6,11 @@ module Docs
end
def get_type
link = at_css("nav a[href='#{result[:path].split('/').last}']")
link = css("nav a[href='#{result[:path].split('/').last}']").last
return 'Miscellaneous' unless link
link.ancestors('ul').last.previous_element.content
type = link.ancestors('ul').last.previous_element.content
type.remove! %r{\s*\(.*\)}
type
end
def additional_entries
@ -25,6 +27,8 @@ module Docs
'Reference: Component'
elsif slug == 'react-api'
'Reference: React'
elsif slug == 'hooks-reference'
'Hooks'
else
'Reference'
end

@ -21,7 +21,7 @@ module Docs
when 'PFADD' then 'HyperLogLog'
when 'CLUSTER ADDSLOTS' then 'Cluster'
when 'GEOADD' then 'Geo'
when 'XADD' then 'Stream'
when 'XACK' then 'Stream'
else 'Miscellaneous'
end
end

@ -1,3 +1,5 @@
# frozen_string_literal: true
module Docs
class Rust
class CleanHtmlFilter < Filter
@ -74,10 +76,10 @@ module Docs
node.remove
end
css('h2 .important-traits', 'h3 .important-traits', 'h4 .important-traits').each do |node|
css('.important-traits').to_a.each_with_index do |node, index|
content = node.at_css('.content.hidden .content')
node.at_css('.content.hidden').replace(content) if content
node.parent.after(node)
node.parent.after(node) if node.parent.name.in?(%(h2 h3 h4))
end
css('code.content').each do |node|

@ -43,13 +43,12 @@ module Docs
end
else
css('.method')
.select {|node| !node.at_css('.fnname').nil?}
.map {|node|
name = node.at_css('.fnname').content
.each_with_object({}) { |node, entries|
name = node.at_css('.fnname').try(:content)
next unless name
name.prepend "#{self.name}::"
[name, node['id']]
}
.uniq {|item| item[0]}
entries[name] ||= [name, node['id']]
}.values
end
end
end

@ -34,7 +34,8 @@ module Docs
'rtree' => 'R*Tree Module',
'rbu' => 'RBU Extension',
'limits' => 'Limits',
'howtocorrupt' => 'How To Corrupt'
'howtocorrupt' => 'How To Corrupt',
'geopoly' => 'Geopoly'
}
def get_type

@ -15,8 +15,28 @@ module Docs
Licensed under the GNU General Public License version 3.
HTML
version '2.7' do
self.release = '2.7.1'
self.base_url = 'https://docs.ansible.com/ansible/2.7/'
options[:skip] = %w(
installation_guide/index.html
reference_appendices/glossary.html
reference_appendices/faq.html
reference_appendices/tower.html
user_guide/quickstart.html
modules/modules_by_category.html
modules/list_of_all_modules.html)
options[:skip_patterns] = [
/\Acommunity.*/i,
/\Adev_guide.*/i,
/\Aroadmap.*/i,
]
end
version '2.6' do
self.release = '2.6.1'
self.release = '2.6.7'
self.base_url = 'https://docs.ansible.com/ansible/2.6/'
options[:skip] = %w(

@ -3,7 +3,7 @@ module Docs
self.name = 'Apache HTTP Server'
self.slug = 'apache_http_server'
self.type = 'apache'
self.release = '2.4.34'
self.release = '2.4.37'
self.base_url = 'https://httpd.apache.org/docs/2.4/en/'
self.links = {
home: 'https://httpd.apache.org/'

@ -1,7 +1,7 @@
module Docs
class Async < UrlScraper
self.type = 'async'
self.release = '2.6.0'
self.release = '2.6.1'
self.base_url = 'https://caolan.github.io/async/'
self.root_path = 'docs.html'
self.links = {
@ -14,7 +14,7 @@ module Docs
options[:skip_links] = true
options[:attribution] = <<-HTML
&copy; 2010&ndash;2017 Caolan McMahon<br>
&copy; 2010&ndash;2018 Caolan McMahon<br>
Licensed under the MIT License.
HTML
end

@ -1,7 +1,6 @@
module Docs
class C < FileScraper
self.type = 'c'
self.dir = '/Users/Thibaut/DevDocs/Docs/c'
self.base_url = 'http://en.cppreference.com/w/c/'
self.root_path = 'header.html'

@ -2,7 +2,7 @@ module Docs
class Codeception < UrlScraper
self.name = 'Codeception'
self.type = 'codeception'
self.release = '2.4.0'
self.release = '2.5.1'
self.base_url = 'https://codeception.com/docs/'
self.root_path = 'index.html'
self.links = {

@ -3,7 +3,7 @@ module Docs
self.name = 'CodeceptJS'
self.type = 'simple'
self.root_path = 'index.html'
self.release = '1.3.1'
self.release = '1.4.4'
self.base_url = 'https://codecept.io/'
self.links = {
home: 'https://codecept.io/',

@ -24,10 +24,15 @@ module Docs
end
options[:attribution] = <<-HTML
&copy; 2012&ndash;2017 The Apache Software Foundation<br>
&copy; 2012&ndash;2018 The Apache Software Foundation<br>
Licensed under the Apache License 2.0.
HTML
version '8' do
self.release = '8.1.2'
self.base_url = 'https://cordova.apache.org/docs/en/8.x/'
end
version '7' do
self.release = '7.1.0'
self.base_url = 'https://cordova.apache.org/docs/en/7.x/'

@ -3,7 +3,6 @@ module Docs
self.name = 'C++'
self.slug = 'cpp'
self.type = 'c'
self.dir = '/Users/Thibaut/DevDocs/Docs/cpp'
self.base_url = 'http://en.cppreference.com/w/cpp/'
self.root_path = 'header.html'

@ -1,7 +1,7 @@
module Docs
class Crystal < UrlScraper
self.type = 'crystal'
self.release = '0.26.0'
self.release = '0.27.0'
self.base_url = 'https://crystal-lang.org/'
self.root_path = "api/#{release}/index.html"
self.initial_paths = %w(docs/index.html)

@ -2,7 +2,7 @@ module Docs
class D < UrlScraper
include MultipleBaseUrls
self.release = '2.081.0'
self.release = '2.083.0'
self.type = 'd'
self.base_urls = ['https://dlang.org/phobos/', 'https://dlang.org/spec/']
self.root_path = 'index.html'

@ -17,7 +17,7 @@ module Docs
HTML
version '5' do
self.release = '5.4.0'
self.release = '5.7.0'
self.base_url = 'https://github.com/d3/'
self.root_path = 'd3/blob/master/API.md'

@ -24,13 +24,11 @@ module Docs
version '2' do
self.release = '2.0.0'
self.dir = '/Users/Thibaut/DevDocs/Docs/Dart2'
self.base_url = "https://api.dartlang.org/stable/#{release}/"
end
version '1' do
self.release = '1.24.3'
self.dir = '/Users/Thibaut/DevDocs/Docs/Dart1'
self.base_url = "https://api.dartlang.org/stable/#{release}/"
end
end

@ -36,37 +36,31 @@ module Docs
version '2.1' do
self.release = '2.1.0'
self.dir = '/Users/Thibaut/DevDocs/Docs/Django21'
self.base_url = 'https://docs.djangoproject.com/en/2.1/'
end
version '2.0' do
self.release = '2.0.7'
self.dir = '/Users/Thibaut/DevDocs/Docs/Django20'
self.base_url = 'https://docs.djangoproject.com/en/2.0/'
end
version '1.11' do
self.release = '1.11.9'
self.dir = '/Users/Thibaut/DevDocs/Docs/Django111'
self.base_url = 'https://docs.djangoproject.com/en/1.11/'
end
version '1.10' do
self.release = '1.10.8'
self.dir = '/Users/Thibaut/DevDocs/Docs/Django110'
self.base_url = 'https://docs.djangoproject.com/en/1.10/'
end
version '1.9' do
self.release = '1.9.13'
self.dir = '/Users/Thibaut/DevDocs/Docs/Django19'
self.base_url = 'https://docs.djangoproject.com/en/1.9/'
end
version '1.8' do
self.release = '1.8.18'
self.dir = '/Users/Thibaut/DevDocs/Docs/Django18'
self.base_url = 'https://docs.djangoproject.com/en/1.8/'
end
end

@ -42,22 +42,18 @@ module Docs
version '21' do
self.release = '21.0'
self.dir = '/Users/Thibaut/DevDocs/Docs/Erlang21'
end
version '20' do
self.release = '20.3'
self.dir = '/Users/Thibaut/DevDocs/Docs/Erlang20'
end
version '19' do
self.release = '19.3'
self.dir = '/Users/Thibaut/DevDocs/Docs/Erlang19'
end
version '18' do
self.release = '18.3'
self.dir = '/Users/Thibaut/DevDocs/Docs/Erlang18'
end
end
end

@ -2,7 +2,7 @@ module Docs
class Eslint < UrlScraper
self.name = 'ESLint'
self.type = 'simple'
self.release = '4.19.0'
self.release = '5.8.0'
self.base_url = 'https://eslint.org/docs/'
self.root_path = 'user-guide/getting-started'
self.links = {

@ -1,7 +1,7 @@
module Docs
class Flow < UrlScraper
self.type = 'simple'
self.release = '0.79.1'
self.release = '0.85.0'
self.base_url = 'https://flow.org/en/docs/'
self.links = {
home: 'https://flow.org/',

@ -1,7 +1,7 @@
module Docs
class Git < UrlScraper
self.type = 'git'
self.release = '2.17.0'
self.release = '2.19.1'
self.base_url = 'https://git-scm.com/docs'
self.initial_paths = %w(/git.html)
self.links = {

@ -48,13 +48,11 @@ module Docs
version '7' do
self.release = '7.3.0'
self.dir = '/Users/Thibaut/DevDocs/Docs/gcc7'
self.base_url = "https://gcc.gnu.org/onlinedocs/gcc-#{release}/gcc/"
end
version '7 CPP' do
self.release = '7.3.0'
self.dir = '/Users/Thibaut/DevDocs/Docs/gcpp7'
self.base_url = "https://gcc.gnu.org/onlinedocs/gcc-#{release}/cpp/"
options[:replace_paths] = CPP_PATHS
@ -62,7 +60,6 @@ module Docs
version '6' do
self.release = '6.4.0'
self.dir = '/Users/Thibaut/DevDocs/Docs/gcc6'
self.base_url = "https://gcc.gnu.org/onlinedocs/gcc-#{release}/gcc/"
options[:root_title] = 'Using the GNU Compiler Collection (GCC)'
@ -70,7 +67,6 @@ module Docs
version '6 CPP' do
self.release = '6.4.0'
self.dir = '/Users/Thibaut/DevDocs/Docs/gcpp6'
self.base_url = "https://gcc.gnu.org/onlinedocs/gcc-#{release}/cpp/"
options[:replace_paths] = CPP_PATHS
@ -78,7 +74,6 @@ module Docs
version '5' do
self.release = '5.4.0'
self.dir = '/Users/Thibaut/DevDocs/Docs/gcc5'
self.base_url = "https://gcc.gnu.org/onlinedocs/gcc-#{release}/gcc/"
options[:root_title] = 'Using the GNU Compiler Collection (GCC)'
@ -86,7 +81,6 @@ module Docs
version '5 CPP' do
self.release = '5.4.0'
self.dir = '/Users/Thibaut/DevDocs/Docs/gcpp5'
self.base_url = "https://gcc.gnu.org/onlinedocs/gcc-#{release}/cpp/"
options[:replace_paths] = CPP_PATHS
@ -94,7 +88,6 @@ module Docs
version '4' do
self.release = '4.9.3'
self.dir = '/Users/Thibaut/DevDocs/Docs/gcc4'
self.base_url = "https://gcc.gnu.org/onlinedocs/gcc-#{release}/gcc/"
options[:root_title] = 'Using the GNU Compiler Collection (GCC)'
@ -102,7 +95,6 @@ module Docs
version '4 CPP' do
self.release = '4.9.3'
self.dir = '/Users/Thibaut/DevDocs/Docs/gcpp4'
self.base_url = "https://gcc.gnu.org/onlinedocs/gcc-#{release}/cpp/"
options[:replace_paths] = CPP_PATHS

@ -8,25 +8,21 @@ module Docs
version '7' do
self.release = '7.3.0'
self.dir = '/Users/Thibaut/DevDocs/Docs/gfortran7'
self.base_url = "https://gcc.gnu.org/onlinedocs/gcc-#{release}/gfortran/"
end
version '6' do
self.release = '6.4.0'
self.dir = '/Users/Thibaut/DevDocs/Docs/gfortran6'
self.base_url = "https://gcc.gnu.org/onlinedocs/gcc-#{release}/gfortran/"
end
version '5' do
self.release = '5.4.0'
self.dir = '/Users/Thibaut/DevDocs/Docs/gfortran5'
self.base_url = "https://gcc.gnu.org/onlinedocs/gcc-#{release}/gfortran/"
end
version '4' do
self.release = '4.9.3'
self.dir = '/Users/Thibaut/DevDocs/Docs/gfortran4'
self.base_url = "https://gcc.gnu.org/onlinedocs/gcc-#{release}/gfortran/"
end
end

@ -1,7 +1,7 @@
module Docs
class Go < UrlScraper
self.type = 'go'
self.release = '1.10.1'
self.release = '1.11.2'
self.base_url = 'https://golang.org/pkg/'
self.links = {
home: 'https://golang.org/',

@ -29,7 +29,7 @@ module Docs
end
version '3.0' do
self.release = '3.0'
self.release = '3.0.6'
self.base_url = "http://docs.godotengine.org/en/#{self.version}/"
end

@ -1,7 +1,7 @@
module Docs
class Graphite < UrlScraper
self.type = 'graphite'
self.release = '1.1.3'
self.release = '1.1.4'
self.base_url = 'https://graphite.readthedocs.io/en/latest/'
self.links = {
code: 'https://github.com/graphite-project/graphite-web'

@ -57,7 +57,7 @@ module Docs
end
version '8' do
self.release = '8.2.1'
self.release = '8.6.1'
self.base_url = "https://downloads.haskell.org/~ghc/#{release}/docs/html/"
end

@ -2,7 +2,7 @@ module Docs
class Homebrew < UrlScraper
self.name = 'Homebrew'
self.type = 'simple'
self.release = '1.4.2'
self.release = '1.8.1'
self.base_url = 'https://docs.brew.sh/'
self.links = {
home: 'https://brew.sh',
@ -13,8 +13,7 @@ module Docs
options[:container] = ->(filter) { filter.root_page? ? '#home' : '#page' }
options[:skip_patterns] = [/maintainer/i, /core\-contributor/i]
options[:skip] = %w(Kickstarter-Supporters.html)
options[:skip_patterns] = [/maintainer/i, /core\-contributor/i, /kickstarter/i]
options[:attribution] = <<-HTML
&copy; 2009&ndash;present Homebrew contributors<br>

@ -1,7 +1,7 @@
module Docs
class Jasmine < UrlScraper
self.type = 'simple'
self.release = '3.2.1'
self.release = '3.3.0'
self.base_url = 'https://jasmine.github.io/api/3.2/'
self.root_path = 'index.html'
self.links = {

@ -8,12 +8,12 @@ module Docs
options[:only_patterns] = [/\Amanual\//, /\Astdlib\//]
options[:attribution] = <<-HTML
&copy; 2009&ndash;2016 Jeff Bezanson, Stefan Karpinski, Viral B. Shah, and other contributors<br>
&copy; 2009&ndash;2018 Jeff Bezanson, Stefan Karpinski, Viral B. Shah, and other contributors<br>
Licensed under the MIT License.
HTML
version '1.0' do
self.release = '1.0.0'
self.release = '1.0.1'
self.base_url = "https://docs.julialang.org/en/v#{release}/"
self.type = 'julia'

@ -3,7 +3,7 @@
module Docs
class Koa < Github
self.base_url = 'https://github.com/koajs/koa/blob/master/docs/'
self.release = '2.5.1'
self.release = '2.6.1'
self.root_path = 'api/index.md'
self.initial_paths = %w[

@ -20,7 +20,7 @@ module Docs
HTML
version '1.3' do
self.release = '1.3.0'
self.release = '1.3.4'
self.base_url = "https://leafletjs.com/reference-#{release}.html"
end

@ -6,7 +6,7 @@ module Docs
self.type = 'sphinx'
self.root_path = 'index.html'
self.links = {
home: 'http://matplotlib.org/',
home: 'https://matplotlib.org/',
code: 'https://github.com/matplotlib/matplotlib'
}
@ -16,34 +16,52 @@ module Docs
options[:skip] = %w(api_changes.html tutorial.html faq.html)
options[:attribution] = <<-HTML
&copy; 2012&ndash;2017 Matplotlib Development Team. All rights reserved.<br>
&copy; 2012&ndash;2018 Matplotlib Development Team. All rights reserved.<br>
Licensed under the Matplotlib License Agreement.
HTML
version '3.0' do
self.release = '3.0.0'
self.base_urls = [
"https://matplotlib.org/#{release}/api/",
"https://matplotlib.org/#{release}/mpl_toolkits/mplot3d/",
"https://matplotlib.org/#{release}/mpl_toolkits/axes_grid/api/"
]
end
version '2.2' do
self.release = '2.2.3'
self.base_urls = [
"https://matplotlib.org/#{release}/api/",
"https://matplotlib.org/#{release}/mpl_toolkits/mplot3d/",
"https://matplotlib.org/#{release}/mpl_toolkits/axes_grid/api/"
]
end
version '2.1' do
self.release = '2.1.0'
self.base_urls = [
"http://matplotlib.org/#{release}/api/",
"http://matplotlib.org/#{release}/mpl_toolkits/mplot3d/",
"http://matplotlib.org/#{release}/mpl_toolkits/axes_grid/api/"
"https://matplotlib.org/#{release}/api/",
"https://matplotlib.org/#{release}/mpl_toolkits/mplot3d/",
"https://matplotlib.org/#{release}/mpl_toolkits/axes_grid/api/"
]
end
version '2.0' do
self.release = '2.0.2'
self.base_urls = [
"http://matplotlib.org/#{release}/api/",
"http://matplotlib.org/#{release}/mpl_toolkits/mplot3d/",
"http://matplotlib.org/#{release}/mpl_toolkits/axes_grid/api/"
"https://matplotlib.org/#{release}/api/",
"https://matplotlib.org/#{release}/mpl_toolkits/mplot3d/",
"https://matplotlib.org/#{release}/mpl_toolkits/axes_grid/api/"
]
end
version '1.5' do
self.release = '1.5.3'
self.base_urls = [
"http://matplotlib.org/#{release}/api/",
"http://matplotlib.org/#{release}/mpl_toolkits/mplot3d/",
"http://matplotlib.org/#{release}/mpl_toolkits/axes_grid/api/"
"https://matplotlib.org/#{release}/api/",
"https://matplotlib.org/#{release}/mpl_toolkits/mplot3d/",
"https://matplotlib.org/#{release}/mpl_toolkits/axes_grid/api/"
]
end
end

@ -1,7 +1,7 @@
module Docs
class Mocha < UrlScraper
self.type = 'simple'
self.release = '5.0.1'
self.release = '5.2.0'
self.base_url = 'https://mochajs.org/'
self.links = {
home: 'https://mochajs.org/',

@ -2,7 +2,7 @@ module Docs
class Nginx < UrlScraper
self.name = 'nginx'
self.type = 'nginx'
self.release = '1.15.0'
self.release = '1.15.5'
self.base_url = 'https://nginx.org/en/docs/'
self.links = {
home: 'https://nginx.org/',

@ -1,7 +1,7 @@
module Docs
class Nim < UrlScraper
self.type = 'simple'
self.release = '0.18.0'
self.release = '0.19.0'
self.base_url = 'https://nim-lang.org/docs/'
self.root_path = 'overview.html'
self.links = {
@ -14,7 +14,7 @@ module Docs
options[:skip] = %w(theindex.html docgen.txt)
options[:attribution] = <<-HTML
&copy; 2006&ndash;2017 Andreas Rumpf<br>
&copy; 2006&ndash;2018 Andreas Rumpf<br>
Licensed under the MIT License.
HTML
end

@ -23,12 +23,17 @@ module Docs
HTML
version do
self.release = '10.9.0'
self.release = '11.1.0'
self.base_url = 'https://nodejs.org/dist/latest-v11.x/docs/api/'
end
version '10 LTS' do
self.release = '10.13.0'
self.base_url = 'https://nodejs.org/dist/latest-v10.x/docs/api/'
end
version '8 LTS' do
self.release = '8.11.4'
self.release = '8.12.0'
self.base_url = 'https://nodejs.org/dist/latest-v8.x/docs/api/'
end

@ -1,9 +1,13 @@
module Docs
class Nokogiri2 < Rdoc
# Instructions:
# 1. Download the latest release at https://github.com/sparklemotion/nokogiri/releases
# 2. Run "bundle install && bundle exec rake docs" (in the Nokogiri directory)
# 3. Copy the "doc" directory to "docs/nokogiri"
self.name = 'Nokogiri'
self.slug = 'nokogiri'
self.release = '1.8.1'
self.dir = '/Users/Thibaut/DevDocs/Docs/RDoc/Nokogiri'
self.release = '1.9.0'
html_filters.replace 'rdoc/entries', 'nokogiri2/entries'
@ -11,8 +15,8 @@ module Docs
options[:only_patterns] = [/\ANokogiri/, /\AXSD/]
options[:attribution] = <<-HTML
&copy; 2008&ndash;2017 Aaron Patterson, Mike Dalessio, Charles Nutter, Sergio Arbeo<br>
Patrick Mahoney, Yoko Harada, Akinori Musha, John Shahid<br>
&copy; 2008&ndash;2018 Aaron Patterson, Mike Dalessio, Charles Nutter, Sergio Arbeo,<br>
Patrick Mahoney, Yoko Harada, Akinori Musha, John Shahid, Lars Kanis<br>
Licensed under the MIT License.
HTML
end

@ -2,7 +2,6 @@ module Docs
class Numpy < FileScraper
self.name = 'NumPy'
self.type = 'sphinx'
self.dir = '/Users/Thibaut/DevDocs/Docs/numpy/reference/'
self.root_path = 'index.html'
self.links = {
home: 'http://www.numpy.org/',

@ -1,11 +1,10 @@
module Docs
class Openjdk < FileScraper
# Downloaded from packages.debian.org/sid/openjdk-8-doc
# Extracting subdirectory /usr/share/doc/openjdk-8-jre-headless/api
self.name = 'OpenJDK'
self.type = 'openjdk'
self.root_path = 'overview-summary.html'
# Downloaded from packages.debian.org/sid/openjdk-8-doc
# Extracting subdirectory /usr/share/doc/openjdk-8-jre-headless/api
self.dir = '/Users/Thibaut/DevDocs/Docs/OpenJDK'
html_filters.insert_after 'internal_urls', 'openjdk/clean_urls'
html_filters.push 'openjdk/entries', 'openjdk/clean_html'

@ -20,6 +20,11 @@ module Docs
Licensed under the 3-clause BSD License.
HTML
version '0.23' do
self.release = '0.23.4'
self.base_url = "http://pandas.pydata.org/pandas-docs/version/#{self.release}/"
end
version '0.22' do
self.release = '0.22.0'
self.base_url = "http://pandas.pydata.org/pandas-docs/version/#{self.release}/"

@ -2,7 +2,6 @@ module Docs
class Perl < FileScraper
self.name = 'Perl'
self.type = 'perl'
self.dir = '/Users/Thibaut/DevDocs/Docs/Perl'
self.root_path = 'index.html'
self.links = {
home: 'https://www.perl.org/'

@ -1,7 +1,7 @@
module Docs
class Phoenix < UrlScraper
self.type = 'elixir'
self.release = '1.3.2'
self.release = '1.3.4'
self.base_url = 'https://hexdocs.pm/'
self.root_path = 'phoenix/Phoenix.html'
self.initial_paths = %w(

@ -1,5 +1,7 @@
module Docs
class Php < FileScraper
# Downloaded from php.net/download-docs.php
include FixInternalUrlsBehavior
self.name = 'PHP'
@ -23,9 +25,6 @@ module Docs
code: 'https://git.php.net/?p=php-src.git;a=summary'
}
# Downloaded from php.net/download-docs.php
self.dir = '/Users/Thibaut/DevDocs/Docs/PHP'
html_filters.push 'php/internal_urls', 'php/entries', 'php/clean_html', 'title'
text_filters.push 'php/fix_urls'

@ -1,6 +1,6 @@
module Docs
class Puppeteer < Github
self.release = '1.8.0'
self.release = '1.10.0'
self.base_url = 'https://github.com/GoogleChrome/puppeteer/blob/v1.8.0/docs/api.md'
self.links = {
code: 'https://github.com/GoogleChrome/puppeteer'

Some files were not shown because too many files have changed in this diff.
