iOS 10+
- If you're unable to upgrade, I apologize.
- I decided to prioritize speed and new features over support for older browsers.
+ If you're unable to upgrade, we apologize.
+ We decided to prioritize speed and new features over support for older browsers.
Note: if you're already using one of the browsers above, check your settings and add-ons.
The app uses feature detection, not user agent sniffing.
- — Thibaut @DevDocs
+ — @DevDocs
"""
diff --git a/assets/javascripts/templates/pages/about_tmpl.coffee b/assets/javascripts/templates/pages/about_tmpl.coffee
index 26ca8e7e..5d997403 100644
--- a/assets/javascripts/templates/pages/about_tmpl.coffee
+++ b/assets/javascripts/templates/pages/about_tmpl.coffee
@@ -11,22 +11,18 @@ app.templates.aboutPage = -> """
DevDocs: API Documentation Browser
- DevDocs combines multiple API documentations in a clean and organized web UI with instant search, offline support, mobile version, dark theme, keyboard shortcuts, and more.
-
+ DevDocs combines multiple developer documentations in a clean and organized web UI with instant search, offline support, mobile version, dark theme, keyboard shortcuts, and more.
+
DevDocs is free and open source. It was created by Thibaut Courouble and is operated by freeCodeCamp.
To keep up-to-date with the latest news:
Copyright and License
- Copyright 2013–2018 Thibaut Courouble and other contributors
+ Copyright 2013–2019 Thibaut Courouble and other contributors
This software is licensed under the terms of the Mozilla Public License v2.0.
You may obtain a copy of the source code at github.com/freeCodeCamp/devdocs.
For more information, see the COPYRIGHT
@@ -48,11 +44,10 @@ app.templates.aboutPage = -> """
Where can I suggest new docs and features?
You can suggest and vote for new docs on the Trello board.
If you have a specific feature request, add it to the issue tracker.
- Otherwise use the mailing list.
+ Otherwise, come talk to us in the Gitter chat room.
Where can I report bugs?
In the issue tracker. Thanks!
- For anything else, feel free to email me at thibaut@devdocs.io.
Credits
@@ -76,12 +71,12 @@ app.templates.aboutPage = -> """
Privacy Policy
- - devdocs.io ("App") is operated by Thibaut Courouble ("We").
-
- We do not collect personal information.
+
- devdocs.io ("App") is operated by freeCodeCamp ("We").
+
- We do not collect personal information through the app.
- We use Google Analytics, Gauges and Sentry to collect anonymous traffic information and improve the app.
- The app uses cookies to store user preferences.
- By using the app, you signify your acceptance of this policy. If you do not agree to this policy, please do not use the app.
-
- If you have any questions regarding privacy, please email thibaut@devdocs.io.
+
- If you have any questions regarding privacy, please email privacy@freecodecamp.org.
"""
@@ -102,7 +97,7 @@ credits = [
'https://www.apache.org/licenses/LICENSE-2.0'
], [
'Async',
- '2010-2017 Caolan McMahon',
+ '2010-2018 Caolan McMahon',
'MIT',
'https://raw.githubusercontent.com/caolan/async/master/LICENSE'
], [
@@ -192,7 +187,7 @@ credits = [
'https://raw.githubusercontent.com/jashkenas/coffeescript/master/LICENSE'
], [
'Cordova',
- '2012-2017 The Apache Software Foundation',
+ '2012-2018 The Apache Software Foundation',
'Apache',
'https://raw.githubusercontent.com/apache/cordova-docs/master/LICENSE'
], [
@@ -372,7 +367,7 @@ credits = [
'https://raw.githubusercontent.com/jquery/api.jqueryui.com/master/LICENSE.txt'
], [
'Julia',
- '2009-2016 Jeff Bezanson, Stefan Karpinski, Viral B. Shah, and other contributors',
+ '2009-2018 Jeff Bezanson, Stefan Karpinski, Viral B. Shah, and other contributors',
'MIT',
'https://raw.githubusercontent.com/JuliaLang/julia/master/LICENSE.md'
], [
@@ -442,7 +437,7 @@ credits = [
'https://daringfireball.net/projects/markdown/license'
], [
'Matplotlib',
- '2012-2017 Matplotlib Development Team. All rights reserved.',
+ '2012-2018 Matplotlib Development Team. All rights reserved.',
'Custom',
'https://raw.githubusercontent.com/matplotlib/matplotlib/master/LICENSE/LICENSE'
], [
@@ -647,7 +642,7 @@ credits = [
'http://scikit-image.org/docs/dev/license.html'
], [
'scikit-learn',
- '2007-2017 The scikit-learn developers',
+ '2007-2018 The scikit-learn developers',
'BSD',
'https://raw.githubusercontent.com/scikit-learn/scikit-learn/master/COPYING'
], [
@@ -692,9 +687,9 @@ credits = [
'https://raw.githubusercontent.com/hashicorp/terraform-website/master/LICENSE.md'
], [
'Twig',
- '2009-2017 The Twig Team',
+ '2009-2018 The Twig Team',
'BSD',
- 'https://twig.sensiolabs.org/license'
+ 'https://twig.symfony.com/license'
], [
'TypeScript',
'Microsoft and other contributors',
@@ -702,7 +697,7 @@ credits = [
'https://raw.githubusercontent.com/Microsoft/TypeScript-Handbook/master/LICENSE'
], [
'Underscore.js',
- '2009-2017 Jeremy Ashkenas, DocumentCloud and Investigative Reporters & Editors',
+ '2009-2018 Jeremy Ashkenas, DocumentCloud and Investigative Reporters & Editors',
'MIT',
'https://raw.githubusercontent.com/jashkenas/underscore/master/LICENSE'
], [
diff --git a/assets/javascripts/templates/pages/root_tmpl.coffee.erb b/assets/javascripts/templates/pages/root_tmpl.coffee.erb
index b5369403..7adce7fd 100644
--- a/assets/javascripts/templates/pages/root_tmpl.coffee.erb
+++ b/assets/javascripts/templates/pages/root_tmpl.coffee.erb
@@ -14,7 +14,7 @@ app.templates.intro = """
Run thor docs:download --installed
to update all downloaded documentations.
To be notified about new versions, don't forget to watch the repository on GitHub.
The issue tracker is the preferred channel for bug reports and
- feature requests. For everything else, use the mailing list.
+ feature requests. For everything else, use Gitter.
Contributions are welcome. See the guidelines.
DevDocs is licensed under the terms of the Mozilla Public License v2.0. For more information,
see the COPYRIGHT and
diff --git a/assets/javascripts/views/content/offline_page.coffee b/assets/javascripts/views/content/offline_page.coffee
index f37bbf8f..9f7f4cf2 100644
--- a/assets/javascripts/views/content/offline_page.coffee
+++ b/assets/javascripts/views/content/offline_page.coffee
@@ -57,6 +57,7 @@ class app.views.OfflinePage extends app.View
doc[action](@onInstallSuccess.bind(@, doc), @onInstallError.bind(@, doc), @onInstallProgress.bind(@, doc))
el.parentNode.innerHTML = "#{el.textContent.replace(/e$/, '')}ing…"
else if action = el.getAttribute('data-action-all')
+ return unless action isnt 'uninstall' or window.confirm('Uninstall all docs?')
app.db.migrate()
$.click(el) for el in @findAll("[data-action='#{action}']")
return
diff --git a/assets/javascripts/views/layout/document.coffee b/assets/javascripts/views/layout/document.coffee
index 10feefd5..02b98c7a 100644
--- a/assets/javascripts/views/layout/document.coffee
+++ b/assets/javascripts/views/layout/document.coffee
@@ -77,7 +77,7 @@ class app.views.Document extends app.View
switch target.getAttribute('data-behavior')
when 'back' then history.back()
when 'reload' then window.location.reload()
- when 'reboot' then window.location = '/'
+ when 'reboot' then app.reboot()
when 'hard-reload' then app.reload()
when 'reset' then app.reset() if confirm('Are you sure you want to reset DevDocs?')
return
diff --git a/assets/stylesheets/application.css.scss b/assets/stylesheets/application.css.scss
index 7c43dde3..85d1134f 100644
--- a/assets/stylesheets/application.css.scss
+++ b/assets/stylesheets/application.css.scss
@@ -3,7 +3,7 @@
//= depend_on sprites/docs.json
/*!
- * Copyright 2013-2018 Thibaut Courouble and other contributors
+ * Copyright 2013-2019 Thibaut Courouble and other contributors
*
* This source code is licensed under the terms of the Mozilla
* Public License, v. 2.0, a copy of which may be obtained at:
diff --git a/assets/stylesheets/components/_content.scss b/assets/stylesheets/components/_content.scss
index c2387836..c8b1863c 100644
--- a/assets/stylesheets/components/_content.scss
+++ b/assets/stylesheets/components/_content.scss
@@ -460,4 +460,5 @@
display: inline-block;
vertical-align: text-top;
margin-left: .25rem;
+ background: inherit;
}
diff --git a/assets/stylesheets/components/_fail.scss b/assets/stylesheets/components/_fail.scss
index 535100ac..c520977e 100644
--- a/assets/stylesheets/components/_fail.scss
+++ b/assets/stylesheets/components/_fail.scss
@@ -32,5 +32,3 @@
}
._fail-text:last-child { margin: 0; }
-
-._fail-link { float: right; }
diff --git a/assets/stylesheets/pages/_mdn.scss b/assets/stylesheets/pages/_mdn.scss
index d2b9b643..fb2cce38 100644
--- a/assets/stylesheets/pages/_mdn.scss
+++ b/assets/stylesheets/pages/_mdn.scss
@@ -30,6 +30,7 @@
.notice,
.warning,
.overheadIndicator,
+ .blockIndicator,
.syntaxbox, // CSS, JavaScript
.twopartsyntaxbox, // CSS
.inheritsbox, // JavaScript
@@ -104,4 +105,28 @@
.cleared { clear: both; } // CSS/box-shadow
code > strong { font-weight: normal; }
+
+ // Compatibility tables
+
+ .bc-github-link {
+ float: right;
+ font-size: .75rem;
+ }
+
+ .bc-supports-yes, .bc-supports-yes + dd, .bc-supports-yes + dd + dd { background: var(--noteGreenBackground); }
+ .bc-supports-partial, .bc-supports-partial + dd, .bc-supports-partial + dd + dd { background: var(--noteOrangeBackground); }
+ .bc-supports-no, .bc-supports-no + dd, .bc-supports-no + dd + dd { background: var(--noteRedBackground); }
+
+ .bc-table {
+ min-width: 100%;
+
+ dl {
+ margin: .25rem 0 0;
+ padding: .25rem 0 0;
+ font-size: .75rem;
+ border-top: 1px solid var(--boxBorder);
+ }
+
+ dd { margin: 0; }
+ }
}
diff --git a/assets/stylesheets/pages/_node.scss b/assets/stylesheets/pages/_node.scss
index 23560151..5d0de4bb 100644
--- a/assets/stylesheets/pages/_node.scss
+++ b/assets/stylesheets/pages/_node.scss
@@ -20,5 +20,8 @@
margin: 0 0 1em 1em;
@extend %label;
}
+
+ .srclink { float: right; }
+ details > table { margin: 0; }
}
diff --git a/assets/stylesheets/pages/_rdoc.scss b/assets/stylesheets/pages/_rdoc.scss
index 7873900a..6622e68e 100644
--- a/assets/stylesheets/pages/_rdoc.scss
+++ b/assets/stylesheets/pages/_rdoc.scss
@@ -33,19 +33,8 @@
}
}
- .method-description { position: relative; }
-
.method-source-code {
display: none;
- position: absolute;
- z-index: 1;
- top: 0;
- left: -1em;
- right: 0;
- background: var(--contentBackground);
- box-shadow: 0 1em 1em 1em var(--contentBackground);
-
- > pre { margin: 0; }
}
// Rails guides
diff --git a/docs/adding-docs.md b/docs/adding-docs.md
new file mode 100644
index 00000000..dfc96cb1
--- /dev/null
+++ b/docs/adding-docs.md
@@ -0,0 +1,23 @@
+Adding a documentation may look like a daunting task but once you get the hang of it, it's actually quite simple. Don't hesitate to ask for help [in Gitter](https://gitter.im/FreeCodeCamp/DevDocs) if you ever get stuck.
+
+**Note:** please read the [contributing guidelines](../.github/CONTRIBUTING.md) before submitting a new documentation.
+
+1. Create a subclass of `Docs::UrlScraper` or `Docs::FileScraper` in the `lib/docs/scrapers/` directory. Its name should be the [PascalCase](http://api.rubyonrails.org/classes/String.html#method-i-camelize) equivalent of the filename (e.g. `my_doc` → `MyDoc`)
+2. Add the appropriate class attributes and filter options (see the [Scraper Reference](./scraper-reference.md) page).
+3. Check that the scraper is listed in `thor docs:list`.
+4. Create filters specific to the scraper in the `lib/docs/filters/[my_doc]/` directory and add them to the class's [filter stacks](./scraper-reference.md#filter-stacks). You may create any number of filters but will need at least the following two:
+ * A [`CleanHtml`](./filter-reference.md#cleanhtmlfilter) filter whose task is to clean the HTML markup (e.g. adding `id` attributes to headings) and remove everything superfluous and/or nonessential.
+ * An [`Entries`](./filter-reference.md#entriesfilter) filter whose task is to determine the pages' metadata (the list of entries, each with a name, type and path).
+ The [Filter Reference](./filter-reference.md) page has all the details about filters.
+5. Using the `thor docs:page [my_doc] [path]` command, check that the scraper works properly. Files will appear in the `public/docs/[my_doc]/` directory (but not inside the app as the command doesn't touch the index). `path` in this case refers to either the remote path (if using `UrlScraper`) or the local path (if using `FileScraper`).
+6. Generate the full documentation using the `thor docs:generate [my_doc] --force` command. Additionally, you can use the `--verbose` option to see which files are being created/updated/deleted (useful to see what changed since the last run), and the `--debug` option to see which URLs are being requested and added to the queue (useful to pin down which page adds unwanted URLs to the queue).
+7. Start the server, open the app, enable the documentation, and see how everything plays out.
+8. Tweak the scraper/filters and repeat 5) and 6) until the pages and metadata are ok.
+9. To customize the pages' styling, create an SCSS file in the `assets/stylesheets/pages/` directory and import it in both `application.css.scss` AND `application-dark.css.scss`. Both the file and CSS class should be named `_[type]` where [type] is equal to the scraper's `type` attribute (documentations with the same type share the same custom CSS and JS). Setting the type to `simple` will apply the general styling rules in `assets/stylesheets/pages/_simple.scss`, which can be used for documentations where little to no CSS changes are needed.
+10. To add syntax highlighting or execute custom JavaScript on the pages, create a file in the `assets/javascripts/views/pages/` directory (take a look at the other files to see how it works).
+11. Add the documentation's icon in the `public/icons/docs/[my_doc]/` directory, in both 16x16 and 32x32-pixel formats. It'll be added to the icon spritesheet after your pull request is merged.
+12. Add the documentation's copyright details to the list in `assets/javascripts/templates/pages/about_tmpl.coffee`. This is the data shown in the table on the [about](https://devdocs.io/about) page, and is ordered alphabetically. Simply copying an existing item, placing it in the right slot and updating the values to match the new scraper will do the job.
+
+If the documentation includes more than a few hundred pages and is available for download, try to scrape it locally (e.g. using `FileScraper`). It'll make the development process much faster and avoid putting too much load on the source site. (It's not a problem if your scraper is coupled to your local setup, just explain how it works in your pull request.)
+
+Finally, try to document your scraper and filters' behavior as much as possible using comments (e.g. why some URLs are ignored, why some HTML markup is removed, why the metadata is set a certain way, etc.). It'll make updating the documentation much easier.
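The filename-to-class-name convention mentioned in step 1 can be sketched as follows. This is a simplified, hypothetical stand-in for illustration only; DevDocs itself relies on Rails' `String#camelize` for the real conversion:

```ruby
# Hypothetical helper mirroring the convention in step 1: the scraper's
# class name is the PascalCase equivalent of its snake_case filename.
def scraper_class_name(filename)
  filename.split('_').map(&:capitalize).join
end

puts scraper_class_name('my_doc') # => "MyDoc"
```

So a scraper defined in `lib/docs/scrapers/my_doc.rb` is expected to declare `class MyDoc` under the `Docs` module.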
diff --git a/docs/filter-reference.md b/docs/filter-reference.md
new file mode 100644
index 00000000..f5c74c66
--- /dev/null
+++ b/docs/filter-reference.md
@@ -0,0 +1,224 @@
+**Table of contents:**
+
+* [Overview](#overview)
+* [Instance methods](#instance-methods)
+* [Core filters](#core-filters)
+* [Custom filters](#custom-filters)
+ - [CleanHtmlFilter](#cleanhtmlfilter)
+ - [EntriesFilter](#entriesfilter)
+
+## Overview
+
+Filters use the [HTML::Pipeline](https://github.com/jch/html-pipeline) library. They take an HTML string or [Nokogiri](http://nokogiri.org/) node as input, optionally perform modifications and/or extract information from it, and then output the result. Together they form a pipeline where each filter hands its output to the next filter's input. Every documentation page passes through this pipeline before being copied to the local filesystem.
+
+Filters are subclasses of the [`Docs::Filter`](https://github.com/Thibaut/devdocs/blob/master/lib/docs/core/filter.rb) class and require a `call` method. A basic implementation looks like this:
+
+```ruby
+module Docs
+ class CustomFilter < Filter
+ def call
+ doc
+ end
+ end
+end
+```
+
+Filters which manipulate the Nokogiri node object (`doc` and related methods) are _HTML filters_ and must not manipulate the HTML string (`html`). Conversely, filters which manipulate the string representation of the document are _text filters_ and must not manipulate the Nokogiri node object. The two types are divided into two stacks within the scrapers. These stacks are then combined into a pipeline that calls the HTML filters before the text filters (more details [here](./scraper-reference.md#filter-stacks)). This avoids parsing the document multiple times.
+
+The `call` method must return either `doc` or `html`, depending on the type of filter.
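As a rough sketch of the pipeline contract described above — plain strings stand in for both filter types here, and `MiniFilter`/`run_pipeline` are illustrative names, not part of the real `Docs::Filter` or HTML::Pipeline API:

```ruby
# Minimal stand-in for the filter contract: each filter wraps the previous
# filter's output and returns its (possibly modified) input from #call.
class MiniFilter
  def initialize(input)
    @input = input
  end

  def call
    @input # a no-op filter, like the CustomFilter example above
  end
end

# A "text filter": operates on the string representation of the document.
class NormalizeNewlinesFilter < MiniFilter
  def call
    @input.gsub("\r\n", "\n")
  end
end

# Each filter hands its output to the next filter's input.
def run_pipeline(input, filters)
  filters.reduce(input) { |acc, filter| filter.new(acc).call }
end

run_pipeline("a\r\nb", [NormalizeNewlinesFilter, MiniFilter]) # => "a\nb"
```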
+
+## Instance methods
+
+* `doc` [Nokogiri::XML::Node]
+ The Nokogiri representation of the container element.
+ See [Nokogiri's API docs](http://www.rubydoc.info/github/sparklemotion/nokogiri/Nokogiri/XML/Node) for the list of available methods.
+
+* `html` [String]
+ The string representation of the container element.
+
+* `context` [Hash] **(frozen)**
+ The scraper's `options` along with a few additional keys: `:base_url`, `:root_url`, `:root_page` and `:url`.
+
+* `result` [Hash]
+ Used to store the page's metadata and pass back information to the scraper.
+ Possible keys:
+
+ - `:path` — the page's normalized path
+ - `:store_path` — the path where the page will be stored (equal to `:path` with `.html` at the end)
+ - `:internal_urls` — the list of distinct internal URLs found within the page
+ - `:entries` — the [`Entry`](https://github.com/Thibaut/devdocs/blob/master/lib/docs/core/models/entry.rb) objects to add to the index
+
+* `css`, `at_css`, `xpath`, `at_xpath`
+ Shortcuts for `doc.css`, `doc.xpath`, etc.
+
+* `base_url`, `current_url`, `root_url` [Docs::URL]
+ Shortcuts for `context[:base_url]`, `context[:url]`, and `context[:root_url]` respectively.
+
+* `root_path` [String]
+ Shortcut for `context[:root_path]`.
+
+* `subpath` [String]
+ The path of the current URL relative to the base URL.
+ _Example: if `base_url` equals `example.com/docs` and `current_url` equals `example.com/docs/file?raw`, the returned value is `/file`._
+
+* `slug` [String]
+ The `subpath` with any leading slash and `.html` extension removed.
+ _Example: if `subpath` equals `/dir/file.html`, the returned value is `dir/file`._
+
+* `root_page?` [Boolean]
+ Returns `true` if the current page is the root page.
+
+* `initial_page?` [Boolean]
+ Returns `true` if the current page is the root page or its subpath is one of the scraper's `initial_paths`.
+
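The `subpath`/`slug` relationship documented above can be expressed as a standalone helper. This is a hypothetical illustration; the real methods live on `Docs::Filter`:

```ruby
# Derives a slug from a subpath, per the definitions above: strip the
# leading slash and the .html extension, if present.
def slug_for(subpath)
  subpath.sub(%r{\A/}, '').sub(/\.html\z/, '')
end

slug_for('/dir/file.html') # => "dir/file"
```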
+## Core filters
+
+* [`ContainerFilter`](https://github.com/Thibaut/devdocs/blob/master/lib/docs/filters/core/container.rb) — changes the root node of the document (remove everything outside)
+* [`CleanHtmlFilter`](https://github.com/Thibaut/devdocs/blob/master/lib/docs/filters/core/clean_html.rb) — removes HTML comments, `