This PR adds the following documentation improvements to fix https://github.com/certbot/certbot/issues/7958:
- Simplify building external plugins
- Separate out certbot snap instructions from plugin instructions
- Mention that dnsimple is just an example for the plugin instructions
- Mention remote build for other architectures
- Mention snap doc exists elsewhere in developer guide (`contributing.rst`)
* Set up generate_dnsplugins_all.sh for all files and parametrize snapcraft and postrefreshhook files
* Create constraints file in the generate_dnsplugins_all script
* Separate out plugin and certbot snaps and update instructions
* Add remote build instructions
* Add pointers to the README to contributing.rst
Fixes #8355
While troubleshooting #8355, I came to the conclusion that BuildKit was causing the problem: without it, all Docker images build correctly. BuildKit was initially enabled to work around a build problem in Azure Pipelines, but my recent tests also showed that this problem no longer occurs.
You can find more details about the troubleshooting and reasoning in #8355.
As a consequence, this PR disables BuildKit, which solves the issue.
Fixes #8202
This PR adds an Azure Pipelines job that executes `certbot plugins --prepare` for each Docker image created during CI on amd64.
* Prepare basic integration tests for certbot dockers
* Add a displayName for the integration tests task
* Add timeout to DNS query function calls (a sketch follows this commit list)
* Modify tests to account for new timeout variable
* Add change to CHANGELOG
* Add `dns.exception.Timeout` to exception handler
* Move changelog to 1.10.0
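A minimal sketch of the timeout handling mentioned above, assuming dnspython (2.x `resolve()` syntax) and illustrative names; this is not the plugin's actual code:
```
import dns.exception
import dns.resolver

def query_txt_records(name, timeout=10.0):
    """Resolve TXT records with a finite timeout instead of waiting forever."""
    resolver = dns.resolver.Resolver()
    try:
        # lifetime bounds the total time spent on the query.
        return resolver.resolve(name, "TXT", lifetime=timeout)
    except dns.exception.Timeout:
        # Treat an unresponsive nameserver like any other DNS failure.
        return None
```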
Fixes https://github.com/certbot/certbot/issues/8171.
See the comment at the top of the script to learn how to set things up and run this. Running the script between releases will have no effect on our snaps and it should fail when creating the GitHub release. The latter is described at https://github.com/certbot/certbot/pull/8189#discussion_r466707114.
* Rename create_github_release to finish_release
* Add initial version of snap release automation.
* Handle snapcraft login.
* Catch OSError raised when snapcraft doesn't exist.
* Update documentation.
* Only publish the Certbot snap for now.
* Fix typo.
* Document other exceptions.
* Document assertion
* Add status message before getting revisions.
* Publish all snaps.
With more and more of our wildcard instructions on https://certbot.eff.org telling people to use these plugins, I think we should get ready to move our DNS plugins to the stable channel. This PR removes grade: devel so the snap store doesn't prevent us from doing that when we want to. See #8128 where we did this to the Certbot snap for more info.
You can see the snap tests passing with this change at https://dev.azure.com/certbot/certbot/_build/results?buildId=2797&view=results.
Fixes https://github.com/certbot/certbot/issues/8292.
This uses the same approach that worked well for us in https://github.com/certbot/certbot/pull/7926. I'm sure we could delete more code or refactor things here, but I think we should make the most conservative changes we can to certbot-auto until we can just delete the entire thing.
I ran the full test suite on these changes at https://dev.azure.com/certbot/certbot/_build/results?buildId=2773&view=results and manually tested things on OpenSUSE and it worked as expected. certbot-auto refused to create new installations and refused to update old ones while continuing to allow the old version of Certbot to run.
* Deprecate cb-auto outside of Debian and RHEL.
* Don't deprecate Amazon Linux yet.
Partial fix for #8280
This PR refactors the snap's bash wrapper script (`/certbot.wrapper`) into the certbot Python codebase. Here are the key points of this refactoring:
* the wrapping logic is applied when the `main` function from `certbot._internal.main` is called and the environment variable `CERTBOT_SNAPPED` is `True` (this variable is set during the snap build)
* the initial bash script wrapper is removed, simplifying `snap/snapcraft.yaml` by removing the `certbot.wrapper` part
* the dependencies on the `curl` and `jq` binaries are removed
* failures when querying the snapd socket are handled correctly and display an informative message explaining how to correct the situation, as required by #8280
One side note about the modifications to `app.certbot.command` in `snapcraft.yaml`: normally calling `bin/certbot` should be sufficient, and it is in the normal situation (the `core` snap is up to date). However, in the same situation as the one that triggers the problem in #8280, using `bin/certbot` makes the snap raise an exception saying that the `certbot.main` module cannot be found.
It seems that when the `core` snap is not up to date (on Debian, for instance, with the default `snapd` installation), the shebang `/usr/bin/env python3` in the `bin/certbot` wrapper is wrongly resolved to the host Python instead of the snap's Python. It works as expected when the `core` snap is up to date. One way to fix this is to keep a bash wrapper script, because then it is the `PATH` value that determines which Python interpreter is used, and `PATH` is correctly set up to resolve it from the snap first.
However, to keep the simplification provided by the wrapper removal, I preferred to use `bin/python3 $SNAP/bin/certbot` as the `command` to explicitly target the correct Python interpreter. Again, this is normally not needed because everything works correctly when the `core` snap is up to date, but since the whole purpose of this change is to handle the bad situations, it is better to have a snap that is actually able to start and display the informative message.
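For illustration, here is one minimal way to query the snapd REST API from Python over its unix socket without `curl` or `jq` (endpoint and error handling simplified; this is a sketch, not the code added by this PR):
```
import json
import socket

SNAPD_SOCKET = "/run/snapd.socket"

def snapd_get(path):
    """Send a GET request to the snapd REST API over its unix socket."""
    with socket.socket(socket.AF_UNIX, socket.SOCK_STREAM) as sock:
        sock.connect(SNAPD_SOCKET)
        # HTTP/1.0 keeps the response un-chunked, so the body is easy to split off.
        sock.sendall("GET {} HTTP/1.0\r\nHost: localhost\r\n\r\n".format(path).encode())
        response = b""
        while True:
            chunk = sock.recv(4096)
            if not chunk:
                break
            response += chunk
    body = response.split(b"\r\n\r\n", 1)[1]
    return json.loads(body)

# Example: list interface connections involving the certbot snap.
# connections = snapd_get("/v2/connections?snap=certbot")
```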
* Refactor the bash wrapper for snap execution as Python code into certbot
* Remove wrapper, finalize the python logic
* Organize code
* Improve error handling
* Update command
* Setup basic certbot logging before running the snap prepare logic
* Improve instructions
* Use logging facility
* Handle properly an exception in snap_config
* Use the python script call approach
* Update instructions to keep sync with https://github.com/certbot/website/pull/650
This reverts commit feca125437.
Since this change landed, ARM builds for many of the DNS plugins have failed every night. See https://dev.azure.com/certbot/certbot/_build?definitionId=5 or our public Mattermost channel.
I quickly tried to fix this myself and wasn't trivially able to do so. I tried setting `SNAPCRAFT_PYTHON_VENV_ARGS: --system-site-packages` and adding `python3-wheel` as a build dependency, but it didn't work for some reason. The `python3-wheel` package didn't seem to be installed.
I still suspect something like this is the approach we should take; however, I want to fix the failing tests now so things are no longer broken in `master` and those of us on the Certbot team at EFF stop getting spammed with 54 (!!) emails about failed builds from Launchpad every night.
Unfortunately, while I was working on this the queue for ARM machines on Launchpad jumped up to an estimated ~20 hour wait, but I confirmed that this fixes the problem by building on an ARM AMI using the instructions at https://github.com/certbot/certbot/blob/master/tools/snap/README.md#use-testing-and-development. If whoever reviews this would like an ARM machine to test on themselves, please let me know.
* add set -e to all bash instances in deploy-stage.yml
* retry uploading snap if we fail
* Add the rest of the set -e calls for bash in azure while we're here
* use retry based on travis_retry
* add set -e to the script: sections that run on macOS/Linux
* actually don't fail on result
* reset result before running command because bash short-circuits `or` conditionals
* remove inapplicable comment
Partial fix for #8256
This PR makes tox call pipstrap before any commands are executed, and makes Azure Pipelines call pipstrap when appropriate (i.e., when an actual call to pip is made).
* Invoke pipstrap in tox and during the CI
* Set default value for PYTHON_VERSION and always set python interpreter
* Set Python for snaps_build also
* Fix the build for Windows installer
* Add a warning comment for pinned versions in pipstrap
* Rebuild letsencrypt-auto
* Same version as the installer build
* Let's update to latest pip for installer tests
The ErrorHandler context manager could produce very verbose CLI output
when handling long exception chains (PEP 3134 enhanced reporting).
Rather than logging every exception with its traceback to the CLI, this
commit changes ErrorHandler so that only the final exception in the
chain, without traceback, is logged to the CLI.
This is consistent with a previous change made in the global except
hook (#8000).
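Roughly, the behavior described above looks like this (a sketch, not Certbot's actual ErrorHandler code):
```
import logging
import sys
import traceback

logger = logging.getLogger(__name__)

def report_exit_error():
    """Log the full chain at debug level, but show only the last error on the CLI."""
    exc_type, exc_value, exc_tb = sys.exc_info()
    # The whole chained traceback still goes to the log file (debug level).
    logger.debug("Exiting due to an error:\n%s",
                 "".join(traceback.format_exception(exc_type, exc_value, exc_tb)))
    # The CLI handler only sees the final exception, without its traceback.
    logger.error("%s", exc_value)
```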
* Fixed a few linting warnings for if not x in y.
These should have been caught by pylint, but weren't.
* Replaced "x in y.keys()" with "x in y".
It's much faster, and more Pythonic.
* nginx: reduced CLI logging when reloading nginx
Hides the output of `nginx -s reload` from the CLI, moving it to
debug-level logging (a sketch of this approach follows the list below).
Additionally, fixes an issue where Certbot did not properly capture the
output of the nginx reload and restart commands.
Fixes #8231
* remove leftover debugging
* reorder CHANGELOG
* don't use bare asserts
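As mentioned above, a sketch of how the reload output can be captured and demoted to debug logging (illustrative, not the plugin's exact code):
```
import logging
import subprocess

logger = logging.getLogger(__name__)

def reload_nginx(nginx_ctl="nginx"):
    """Reload nginx, capturing its output instead of letting it hit the terminal."""
    proc = subprocess.run([nginx_ctl, "-s", "reload"],
                          stdout=subprocess.PIPE, stderr=subprocess.PIPE,
                          universal_newlines=True, check=False)
    if proc.stdout:
        logger.debug("nginx reload stdout: %s", proc.stdout)
    if proc.stderr:
        logger.debug("nginx reload stderr: %s", proc.stderr)
    if proc.returncode != 0:
        raise RuntimeError("nginx reload failed with exit code %d" % proc.returncode)
```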
random25863.example.org appears in multiple port 80 virtualhosts in the
nginx testdata tarball and also is in the nginx-roundtrip-testdata.
Certbot doesn't handle these properly, which results in random test
failures.
This commit ensures that random25863.example.org only appears in a
single virtualhost and should ensure that the tests pass consistently.
Fixes #7585
This PR removes the configuration that made the test runner included in `setuptools` use pytest, removes the deprecated parameters related to setuptools testing in `setup.py`, and updates the packaging guide to use `python -m pytest` instead of `python setup.py test`.
The farm test `test_sdist.sh` is also updated to use pytest directly. This test is designed to reproduce the steps used by OS integrators when they package `certbot` and to ensure that we are not breaking something that would impact their work. We discussed this with integrators from RHEL/CentOS and Debian, and they are fine with us testing the sdist directly with pytest.
One execution of the `test_sdist.sh` farm test with the modifications made by this PR can be seen here: https://dev.azure.com/certbot/certbot/_build/results?buildId=2606&view=results
* Remove setuptools deprecated features about testing
* Updating packaging guide
* Add changelog entry
Fixes https://github.com/certbot/certbot/issues/8162.
I had to update the base of the Dockerfile to get a new enough version of Python 3. I also simplified things a lot and removed a lot of the comments that were essentially just describing how Dockerfiles work.
The most complicated changes here are in `testdata`. You can find a diff of the changes to `nginx.tar.gz` at https://gist.github.com/c7727db0cecf3f15f02439f085c73848.
The first problem was that there were some complaints from the new Apache/nginx/OpenSSL version about the 1024 bit RSA key so I updated `empty_cert.pem` both inside and outside of the tarball as well as the corresponding private key in the tarball to use a 2048 bit key.
The second problem is trickier to understand. If you look at the output from nginx after loading the config from `lots/` you'll see it complaining about conflicting `server_name` directives for the directives I deleted. See https://dev.azure.com/certbot/certbot/_build/results?buildId=2578&view=logs&j=250aa146-b243-5f8f-bf86-17a529c9fb7e&t=9baa2014-9673-5e78-8f4f-7a463caf2bfa&l=1516.
After switching the tests to Python 3, tests on that domain started failing. What I believe to be happening is we were just lucky these tests were passing to begin with. In both the Apache and Nginx plugin, if there are conflicting virtual hosts like this, we just arbitrarily pick one. The relevant code here for nginx is 575092d603/certbot-nginx/certbot_nginx/_internal/configurator.py (L455)
I played around with a debugger and confirmed that before I removed the conflicting server names, there were two exact matches for the domain we were searching for here.
I think all that's going on is that with the switch to Python 3, the vhost we happen to choose changes and "breaks" the test. I suspect this is due to something like getting values out of a dict somewhere, where the order of items in a dict while iterating over it differs between Python 2 and 3. I didn't track down where this difference happens, but I personally don't think it's a good use of time since I think the real problem here is that the nginx config being tested was invalid, with conflicting `server` blocks.
I removed all references to the `server_name` causing conflicts in that nginx configuration because both server blocks had other domains that are being tested, but I could add either back if you prefer. You can see the `nginx_compat` test passing with these changes at https://dev.azure.com/certbot/certbot/_build/results?buildId=2587&view=logs&j=250aa146-b243-5f8f-bf86-17a529c9fb7e.
* update Dockerfile
* Fix apache_compat on py3.
* Update empty_cert.pem.
The command used here was `openssl req -key certbot/certbot/tests/testdata/rsa2048_key.pem -new -subj '/CN=example.com' -x509 > certbot-compatibility-test/certbot_compatibility_test/testdata/empty_cert.pem`.
* update nginx.tar.gz
* Remove conflicting server_names
This commit fixes an issue with the nginx parser where it would perform
case-sensitive matching against server_name.
This would cause the authenticator and installer to ignore existing
virtualhosts containing uppercase characters, resulting in duplicate
virtualhosts and broken configurations.
"Exact" and "wildcard" matching is now case-insensitive. Regex-based
matching will continue to respect the case mode of the pattern.
Fixes #6776.
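A rough illustration of the matching rule described above (not the parser's actual code; `fnmatch` stands in for nginx's wildcard handling):
```
import fnmatch

def names_match(server_name, domain):
    """Case-insensitive comparison for exact and wildcard server_names.

    Regex server_names (those starting with '~') are handled separately and
    keep whatever case mode their pattern specifies.
    """
    server_name = server_name.lower()
    domain = domain.lower()
    if "*" in server_name:
        # Wildcard name such as *.example.com.
        return fnmatch.fnmatchcase(domain, server_name)
    return server_name == domain

# names_match("Example.COM", "example.com")        -> True
# names_match("*.Example.com", "www.example.com")  -> True
```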
I'm adding this comment as part of the resolution of #8251. I think rewriting the script in Python is something we really should only worry about if we're working on the script in the future. Because of this, I personally prefer a code comment rather than an issue here.
In order to avoid potentially breaking changes in the snap CLI on the
host, this commit changes certbot.wrapper to use the snap REST API (via
curl and jq) to list connected Certbot plugins.
* snap: Fix "stack smashing" error in wrapper
certbot.wrapper had implicit dependencies on sed, awk and coreutils,
which were being accidentally provided through the host system. Because
certbot.wrapper modifies LD_LIBRARY_PATH, this was causing some systems
to load an incompatible combination of shared libraries, resulting in sed
crashing.
This commit reduces the dependencies of this script to just gawk, and
explicitly stages it as part of the Certbot snap.
It additionally moves invocations of all host system programs to a
moment prior to the modification of LD_LIBRARY_PATH, and the invocation
of snapped programs to after the modification.
Fixes #8245
* snap: Don't modify LD_LIBRARY_PATH
* leftover tracing
* snap: revert curl/jq in wrapper, use gawk for now
Fixes #8252
@bmw and I dug quite a lot into why the failure happens on the ARM snap, and here is what we understood:
* the failure has occurred since version 50 of setuptools became available
* normally we should not be impacted, because the setuptools version used in the snap build should be the one installed by the `core20` base snap, since the build occurs in a `venv` created with `--system-site-packages`
* BUT combined with the build isolation provided by recent versions of pip (to implement PEP 517), a bad interaction happens: following the build system definition provided by `cryptography`, pip installs the most recent version of setuptools on a separate path for the build (because `cryptography` only asks for a minimal version of `setuptools`), and features of this newer version then conflict with the old version of `setuptools` initially present
* the exact interaction is described here: https://github.com/pypa/pip/issues/6264#issuecomment-685230919. Basically, the new version of `setuptools` triggers some hacks that are then applied at runtime to the old version of `setuptools`, which is still available in `sys.path` at this point, and this breaks the build.
To fix this, one can disable pip's build isolation when building cryptography by passing `PIP_NO_ISOLATION_BUILD=no` to pip, which is what this PR does.
The consequence is that we are no longer PEP 517 compliant: if needed, the `cryptography` library will be built using the `setuptools` available on the system. In general I think this makes sense for the snap builds, since we control the build environment precisely, and it makes builds consistent so they will not be broken by a new version of a build system when library maintainers did not pin a strict version of it in their build requirements. However, we now need to make sure we have a compatible build system for all libraries that declare specific build-system requirements via the PEP 517 definition in `pyproject.toml`.
As of now I think this is a safe move as long as we keep using the most recent version of `setuptools` available in Ubuntu 20.04, which is the case here for snap builds. It may, however, be problematic if some libraries require a build system other than `setuptools` and do not provide a fallback to a `setuptools` build. For the record, `dns-lexicon`, which I maintain, uses `poetry` and therefore a PEP 517 compliant build system definition, but it also provides this fallback (https://github.com/AnalogJ/lexicon/blob/master/setup.py).
Full test suite compiling the snaps for the 3 architectures using this PR is available here: https://dev.azure.com/certbot/certbot/_build/results?buildId=2596&view=results
Fixing a mistake in pull request #8212 where I recorded my changes in an already released version 😳.
- Moving new changes out of a previous changelog and into the next
release's changelog
* Allow user to remove email using update command
Fixes#3162. Slight change to control flow to replace current email
addresses with an empty list. Also add appropriate result message when
an email is removed.
* Update ACME to allow update to remove fields
- New field type "UnFalseyField" that treats all non-None fields as
non-empty
- Contact changed to new field type to allow sending of empty contact
field
- Certbot update adjusted to use tuple instead of None when empty
- Test updated to check more logic
- Unrelated type hint added to keep pycharm gods happy
* Moved some mocks into decorators
* Restore default to `contact` but do not serialize
- Add `to_partial_json` and `fields_to_partial_json` to Registration
- Store private variable noting if the value of the `contact` field was
provided by the user.
- Change message when updating without email to reflect removal of
all contact info.
- Add note in changelog that `update_account` with the
`--register-unsafely-without-email` flag will remove contact
from an account.
* Reverse logic for field handling on serialization
Now forcibly add contact when serializing, but go back to the base `jose`
field type.
* Responding to Review
- change out of date name
- update several comments
- update `from_data` function of `Registration`
- Update test to remove superfluous mock
* Responding to review
- Change comments to make from_data more clear
- Remove code worried about None (omitempty has got my back)
- Update test to be more reliable
- Add typing import with comment to avoid pylint bug
Fixes https://github.com/certbot/certbot/issues/8165.
I moved the prerequisites up to the "Running a local copy of the client" section so that `contributing.html#prerequisites` still links to information about installing Certbot's dependencies.
I left all certbot-auto documentation that wasn't explicitly encouraging its use. I think we can rip that out once the script is deprecated.
* Create bootstrap script
* Delete a whole bunch of the bootstrap script
* modify test_tests to use new script
* put python version checking back in
* add x
* call the venv creation from inside the bootstrap
* add targets back
* modify test_apache2 to use new format
* shouldn't need virtualenv on rhel
* readd targets
* Update test_sdists to use new script
* move setting up venv back out of script so it's not run with sudo
* take venv3.py call out of bootstrap in all scripts
* add additional python3-devel pkg name
* fix test_sdists
* enable additional rhel7 repos
* clean up code and comments
* Update tests and instructions to use auto_targets.yaml with test_leauto_upgrades.sh and test_letsencrypt_auto_certonly_standalone.sh
* only install python3-devel.x86_64 for rhel7
* Upgrade python version for debian in test_apache2.sh
* don't run test_tests or test_sdists on debian 9 or ubuntu 16.04
* Add 20.04 and 20.04 arm images to targets.yaml
* use pyenv to upgrade to python3.5
* remove arm64 instance because it's having auth trouble
* correct pyenv usage on ubuntu
* add arm64 target to targets.yaml
* replace debian 9 arm64 with ubuntu 20
* don't try to upgrade a perfectly good python version
* let's just add ubuntu20 to apache2_targets while we're here
* uncomment test_apache2
* move adding python3-devel.x86_64 to bootstrap_os_packages to avoid potential race condition
* no need to specify the arch once extra rhel7 repos enabled
* explicitly specify python3
* don't fail if we can't enable rhel7 extras
* capture python36-devel as well
Fixes https://github.com/certbot/certbot/issues/8022, https://github.com/certbot-docker/certbot-docker/issues/25, and https://github.com/certbot-docker/certbot-docker/issues/20.
This PR builds on https://github.com/certbot/certbot/pull/8192 to set up builds in Azure similar to what we currently have at release time, as well as nightly builds, allowing us to catch problems in these images before a release. It also fully automates our Docker deployments, removing a manual step from our release process. We'll need to update our release instructions once this PR lands.
If you're not familiar with our `certbot-docker` setup, you can read about how these scripts customized the build process on Docker Hub at https://docs.docker.com/docker-hub/builds/advanced/.
You can see the process working properly at:
* Nightly build on my fork: https://dev.azure.com/bmw0523/bmw/_build/results?buildId=345&view=logs&j=70ac378a-cb1f-50d1-b328-169807afbcfa
* Release build on my fork: https://dev.azure.com/bmw0523/bmw/_build/results?buildId=346&view=logs&j=70ac378a-cb1f-50d1-b328-169807afbcfa
* Nightly build on Certbot's Azure setup: https://dev.azure.com/certbot/certbot/_build/results?buildId=2426&view=logs&j=70ac378a-cb1f-50d1-b328-169807afbcfa
The builds on my fork pushed to https://hub.docker.com/u/certbottest. The credentials for this account are in our shared vault in 1password if you want to play around with this.
While the scripts will (almost?) always be run in CI, I tested the scripts successfully on macOS, Ubuntu 18.04, and Ubuntu 20.04; however, **the scripts do not seem to work when using the Docker snap, at least on Ubuntu 20.04.** They do work with the `docker.io` packages from `apt`. I was able to make things work by no longer setting `DOCKER_BUILDKIT`, but as I described in the code comments, this breaks things on Azure.
When writing this PR, I tried to make the minimal modifications to our current setup to get the behavior we want. I'm planning on working on splitting the Docker builds into different Azure jobs so it doesn't increase the overall build time, but this isn't trivial so I figured it would be best done in a separate PR.
* Remove license.
* update build scripts
* write deploy code
* Remove unused READMEs.
* rewrite readme
* Make testing on a fork easier.
* Set up Azure automation.
* fix typo
* Make output more verbose.
* clean up cleanup...everybody everywhere
* separate build and deploy
* Document docker-hub credentials
* Use Docker BuildKit when building.
* Remove unneeded .gitignore files.
* Fix tools/docker/README.md grammar
Co-authored-by: ohemorange <ebportnoy@gmail.com>
* Clarify <TAG> in README.
* no docker snap
* rename docker job
Co-authored-by: Erica Portnoy <ebportnoy@gmail.com>
* Improve github release creation process
* Comment file
* Update tools/create_github_release.py
Co-authored-by: Brad Warren <bmw@users.noreply.github.com>
* run chmod +x on tools/create_github_release.py
* Add description of create github release method
* remove references to unnecessary azure credential
* remove unnecessary import
* Add reminders to update other file to definitions in .azure-pipelines
* Raise an error if we fail to fetch the artifact from azure
* Create github release as a draft, upload artifacts, then un-draft, for hooks to be run at the right point
* get the version number from the release
* add new packages to dev3_extras so they're installed by tools/venv3.py
* remove unnecessary import
* fun fact: tempdirs behave differently when used as a context manager
* Move comment to construct.py
Co-authored-by: Brad Warren <bmw@users.noreply.github.com>
It seems my old instructions are no longer required for Gentoo. To be honest, I don't know since when, but my own Gentoo server isn't even using the workaround currently mentioned in the documentation. So it seems the Apache plugin works just fine without this workaround 🤦
Also, the Gentoo repository has obviously also included nginx for a long time now; I guess my original text is ancient. It also includes *one* of the many DNS plugins, with a different maintainer than the other "main" packages. It currently only has version 0.39.0, so I don't know whether it's being maintained officially.
* Remove obsolete Gentoo instructions and add packages.
* Capitalize note
Co-authored-by: Brad Warren <bmw@users.noreply.github.com>
* nginx: add --nginx-sleep-seconds
As described in #7422, reloading nginx is an asynchronous process and
Certbot does not know when it is complete. In an environment where this
reload takes a long time, the nginx plugin suffers from an issue where
it responds to and fails the ACME challenge before the nginx server is
ready to serve it.
Following the discussion in a previous PR #7740, this commit introduces
a new flag, --nginx-sleep-seconds, which may be used to increase the
duration that Certbot will wait for nginx to reload, from its previously
hard-coded value of 1s (a sketch of the idea follows this commit list).
Fixes #7422
* update CHANGELOG
* nginx: update docstring for nginx_restart
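As mentioned above, a minimal sketch of the idea behind `--nginx-sleep-seconds` (names illustrative, not the plugin's actual code):
```
import subprocess
import time

def reload_and_wait(nginx_ctl="nginx", sleep_seconds=1):
    """Reload nginx, then wait, since the reload completes asynchronously."""
    subprocess.run([nginx_ctl, "-s", "reload"], check=True)
    # --nginx-sleep-seconds makes this duration configurable; it used to be a
    # hard-coded 1 second.
    time.sleep(sleep_seconds)
```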
Fixes #8169
This PR improves the snaps remote build script by dumping the output of `snapcraft remote-build` when unexpected behavior is detected:
* when all builds for a project finish with a zero status code, and none of them are marked as failed, we expect to have all the associated snap files available locally.
* when some builds are marked as failed, we expect to have a build output for each of them available locally.
In these two situations, if the expectations are not met, the script will display the output of `snapcraft remote-build` itself. I also added error handling to deal gracefully with the absence of an expected build output on the local machine.
* Improve log dump in snaps remote builds when an unexpected behavior is detected
* Use the manager
* Update tools/snap/build_remote.py
Co-authored-by: Brad Warren <bmw@users.noreply.github.com>
Fixes#7863.
Connect command is `sudo snap connect certbot-dns-dnsimple:certbot-metadata certbot:certbot-metadata`
Logs are `cat /var/snap/certbot-dns-dnsimple/current/debuglog`
Echoes in the hook are only printed to the terminal when the hook exits 0; otherwise, check the logs in the `debuglog` mentioned above.
Manual tests covered all combinations of connected, unconnected, installed for the first or second time, etc., with passing and failing version checks.
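The core of the version check performed by the post-refresh hook (described in the commits below) looks roughly like this; file locations and names are illustrative, not the actual hook code:
```
import sys
from packaging import version

def refresh_allowed(certbot_version_file, plugin_version):
    """Allow the plugin refresh only if the installed Certbot is new enough."""
    try:
        with open(certbot_version_file) as f:
            certbot_version = f.read().strip()
    except OSError:
        # Interface not connected or first install: nothing to compare against.
        return True
    # Require certbot version >= plugin version, per the commits below.
    return version.parse(certbot_version) >= version.parse(plugin_version)

if __name__ == "__main__":
    if not refresh_allowed(sys.argv[1], sys.argv[2]):
        sys.exit("Certbot is too old for this plugin version; refresh aborted.")
```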
* Make dnsimple not update if certbot is too old
* create an interface to read cb version
* add missing newline
* fix syntax
* trying to figure out the consumer syntax
* trying to figure out the consumer syntax, again
* only check post first install
* valid setting name
* test for first install differently
* snapctl doesn't error if it fails I guess
* time to do some print debugging
* continue playing with syntax
* once again, fooled by bash int vs string comparisons!
* debugging
* if we use post and pre together we can do this
* is this how content interface syntax works
* it's a directory?
* more debug
* what's that error message again?
* try other syntax
* if it's not documented just guess at syntax
* actually, I think this is the syntax
* oops didn't set for new hook
* test passing information along connection
* interface attributes can only be set during the execution of prepare hooks
* just do it with main connection
* undo last few test changes
* Add some printing to make sure we understand what's going on
* create empty directory to bind to
* put mkdir in the correct part
* let's inspect the environment
* it can't run bash directly.
* perhaps only directories can be shared via the content interface
* update name of folder
* echo to debug log to understand what's going on exactly. we have file access though!
* update grep for new file
* more printing
* echo to the debug log
* ok NOW all print statements are going to the log
* why does echo need two >s
* remove unnecessary extra check, just check if the init file is available
* check if certbot version will be available post-refresh after all
* pre-refresh hook is not necessary to get certbot version
* update mkdir so we don't have to clean each time
* try comparing version numbers in python
* it's python3
* we need different prints for if we succeed or if we fail.
* improve bash syntax
* remove some debugging code
* Remove debug script
* remove spaces for clarity
* consolidate parts and remove more test code
* s/certbot-version/certbot-metadata/g
* use sys.exit instead of exit
* find and save certbot version on the certbot side
* change presence test to new file
* switch to using packaging.version.parse instead of LooseVersion
* switch to requiring certbot version >= plugin version
* add plugin snap changes to generate script
* Add comment to generation file saying not to edit generated files manually
* Create post-refresh hook for all plugins with script
* generate files using new script
* update snapcraft.yaml files for plugins
* bin/sh comes first
* Add packaging to install_requires
* Check that refresh is allowed in integration test
* switch plug and slot names in integration test
* Update tools/generate_dnsplugins_postrefreshhook.sh
Co-authored-by: Brad Warren <bmw@users.noreply.github.com>
* small bash fixes
* Update snap readme with new instructions
* Run tools/generate_dnsplugins_postrefreshhook.sh
* Update tools/snap/generate_dnsplugins_postrefreshhook.sh
Co-authored-by: Brad Warren <bmw@users.noreply.github.com>
Co-authored-by: Brad Warren <bmw@users.noreply.github.com>
Snapcraft has a feature named `remote-build`. It allows compiling snaps for several architectures using Canonical's dedicated build infrastructure. Compared to the QEMU-enabled Docker approach currently used, remote build has several advantages:
* the builds are done on the native architecture, making them generally faster than what can be achieved with QEMU
* it avoids depending on `adferrand/snapcraft` (which could otherwise be fixed by the merge of https://github.com/snapcore/snapcraft/pull/3144, but that will not happen in the short term)
* when everything goes well, all snap builds can run in parallel and be orchestrated by a single Azure Pipelines job, since the heavy tasks are done remotely.
This PR makes the necessary adjustments to use the remote build feature instead of the QEMU-enabled Docker approach.
One complex task was compiling the `certbot` snap on `arm64` and `armhf`. On these architectures the pre-compiled wheel for `cffi` is not available, so it needs to be compiled during the snap build. Sadly, the current version of the python plugin in snapcraft is limited by the fact that `wheel` is not installed in the virtual environment set up to build the Python packages, and there is no easy way to change that except by overriding the whole build process.
In the long term, I think I will open a PR on the `snapcraft` Git repository to provide a proper solution. For the short term, I used the ability to pass arguments to the `venv` module to add the `--system-site-packages` flag. With it, the virtual environment can use the system site-packages, where `wheel` is available.
The other significant additions are in the `tools/snap/build_remote.py` script. While invoking the remote build on a local machine is quite straightforward, it is another story on CI, because we need build auditability and resiliency during these non-interactive actions. In particular, we should avoid inconsistent results in the nightly and release pipelines as much as possible.
So this script wraps the `snapcraft` call in retry logic and improves its logging in the context of parallel builds.
As for the minor modifications, they are mainly about ensuring that the plugins can be built (some of them also need `cffi`, for instance) and simplifying the Azure Pipelines configuration, since all snaps are retrieved in one go.
Please note that the `test-` branches still build only the `amd64` architecture. I noticed that builds on `arm64` and `armhf` tend to be very slow to start (up to 40 minutes), while the `amd64` ones wait at most 10 minutes, and usually only 30 seconds when the overall load on Canonical's side is low.
To work on the `certbot/certbot` repository, one secure file needs to be added, because `snapcraft` needs to be authenticated against Launchpad with credentials allowing remote builds. To do so, from a local machine that has this capability, one can extract the existing file at `$HOME/.local/share/snapcraft/provider/launchpad/credentials` and register it as a secure file in Azure Pipelines under the name `snapcraftRemoteBuildCredentials`.
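The retry wrapper in `tools/snap/build_remote.py` boils down to something like the following (heavily simplified; flags and retry counts are illustrative):
```
import subprocess

def remote_build(archs, attempts=3):
    """Run snapcraft remote-build, retrying when the remote build fails to start."""
    command = ["snapcraft", "remote-build", "--launchpad-accept-public-upload",
               "--build-on", ",".join(archs)]
    for attempt in range(1, attempts + 1):
        result = subprocess.run(command, stdout=subprocess.PIPE,
                                stderr=subprocess.STDOUT, universal_newlines=True)
        if result.returncode == 0:
            return result.stdout
        print("remote-build attempt {} failed, retrying...".format(attempt))
    raise RuntimeError("snapcraft remote-build failed after {} attempts".format(attempts))
```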
* Define scripts
* Setup pipeline to use remote builds
* Focus on packaging builds
* Set credentials
* Setup git
* Launch all builds in parallel
* Add dev dependencies to build cffi and cryptography
* Convert to a python logic
* Reorganize the pipeline
* Handle the fact that snap builds may be taken from cache
* Generate constraints
* Exit code
* Check existence
* Try to handle better non zero exit code
* Add --system-site-packages to get wheel in the venv
* Add executable permissions
* Troubleshoot
* Dynamic display, take the maximum timeout for snap build job
* Allow retries if the remote build does not start
* Trigger only amd64 builds for test branches
* Exit properly
* Update snapcraft.yaml
* Fix snap run
* Set secured file name
* Update .azure-pipelines/templates/jobs/packaging-jobs.yml
Co-authored-by: Brad Warren <bmw@users.noreply.github.com>
* Update .azure-pipelines/templates/jobs/packaging-jobs.yml
Co-authored-by: Brad Warren <bmw@users.noreply.github.com>
* Update .azure-pipelines/templates/jobs/packaging-jobs.yml
Co-authored-by: Brad Warren <bmw@users.noreply.github.com>
* Move order in deps
* Reactivate all builds
* Use Manager() as a context manager
* Use Pool as a context manager
* Some nice refactorings
* Check snapcraft execution interruption with exit codes
* Use f-string and format expressions
* Start log
* Consistent use of single/double quotes
* Better loop to extract lines
* Retry on build failures
* Few optimizations
Co-authored-by: Brad Warren <bmw@users.noreply.github.com>
Fixes #8149
This PR adds warnings about the upcoming deprecation of Python 3.5 in Certbot.
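For illustration, the kind of warning being added (a sketch, not the exact code or wording):
```
import sys
import warnings

if sys.version_info[:2] == (3, 5):
    warnings.warn(
        "Python 3.5 support will be dropped in a future release of Certbot; "
        "please upgrade your Python version.",
        PendingDeprecationWarning,
    )
```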
* Add warnings about Python 3.5 deprecation in Certbot
* Update certbot/certbot/__init__.py
Co-authored-by: Brad Warren <bmw@users.noreply.github.com>
Co-authored-by: Brad Warren <bmw@users.noreply.github.com>
«When you add or change DNS zones or records, your changes will now be
reflected at our authoritative nameservers in under 60 seconds. This is
down from the previous “every quarter hour” approach that we had for so
long.» - https://www.linode.com/blog/linode/linode-turns-17/
Fixes #4351
This PR proposes a solution for using third-party plugins without the `pip_package_name:` prefix in the plugin name, plugin-specific flags, and keys in DNS plugin credential files.
A first solution was proposed in #6372, and a more advanced one in #7026. #7026 also added a deprecation warning when the old plugin name `pip_package_name:plugin_name` was used.
However, #7026 had some limitations, in particular the fact that existing flags of the form `pip_package_name:dns_plugin_option` or keys like `pip_package_name:key` in DNS plugin credential files were no longer read. This would have led to silent failures during renewals if the user did not explicitly update the configuration.
I tried to fix that on top of #7026, but the changes needed are complex and create new problems of their own, like unexpected erasure of values in the renewal configurations.
Instead, this PR tries a new approach: when `find_all()` is called, the `PluginsRegistry` in the `certbot._internal.plugins.disco` module registers two plugins for a given entry point referring to a third-party plugin:
* one plugin with the name `plugin_name`
* one plugin with the name `pip_package_name:plugin_name` (like before)
This way, every existing configuration continues to work without any change (credentials, renewal configuration, CLI flags), and new configurations can refer to the new plugin name without the prefix and use the appropriate CLI flags and credentials without the prefix as well.
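A rough sketch of that registration idea (illustrative only; the real `PluginsRegistry` logic in `certbot._internal.plugins.disco` does more):
```
def register_plugin(registry, entry_point):
    """Register a third-party plugin under both its short and prefixed names."""
    plugin_class = entry_point.load()
    short_name = entry_point.name
    prefixed_name = "{}:{}".format(entry_point.dist.project_name, entry_point.name)
    registry[short_name] = plugin_class
    if prefixed_name != short_name:
        # Old-style name kept for backwards compatibility; it is hidden from
        # `certbot plugins` output and marked as deprecated elsewhere.
        registry[prefixed_name] = plugin_class
```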
On top of that, I added the deprecation path from #7026 (thanks @coldfix!):
* the plugin named `pip_package_name:plugin_name` is hidden from `certbot plugins` output
* the help for this plugin is still displayed, and a deprecation warning is displayed in the description
* when invoked, the same deprecation warning is displayed in the terminal
* Support both prefixed and unprefixed third-party plugins
* Adapt tests
* Add deprecation path
* Named parameters
* Add deprecation warning in CLI
* Add a changelog
Fixes #8041
This PR makes Azure Pipelines build the DNS plugin snaps for the 3 architectures during CI.
It leverages the existing logic for building the Certbot snap in order to deploy a QEMU environment with Docker, and leverages the local PyPI index to speed up the build when installing `cffi` and `cryptography`.
All DNS plugin snaps are built in one single Docker container, in order to save the time required to install the system dependencies on the first start of `snapcraft`, which speeds up the build significantly.
In the end, all `amd64` DNS plugin snaps are built within 6 minutes. For `arm64` and `armhf`, it is around 40 minutes: this is actually quite fast, considering that 14 DNS plugin snaps are built.
However, building all 3 architectures is still an extremely heavy task, even for Azure Pipelines and its 10 parallel jobs. That is why I skip the `arm64` and `armhf` builds for the `full-test-suite` and run them only for `nightly` and `release`. This means, however, that these builds will not be done for the release branches. If this is a problem, I can add a more elaborate suspend condition to trigger the builds in that case.
All snaps are stored in the pipeline artifacts storage, making them available for publication during a `release` pipeline.
The PR is marked as a draft for now, because I am temporarily using `pr_test-suite` to validate the packaging jobs when commits are pushed. Once the PR is ready, I will revert it to the normal configuration (running the standard tests).
* Configure a script to build DNS snaps
* Focus on packaging
* Trigger all architectures
* Add extra index
* Prepare conditional suspend
* Set final suspend logic
* Set final suspend value
* Loop for publication
* Use python3
* Clean before build
* Add a test
* Add test job in Azure
* Preserve env
* Apply normal config for pipelines
* Skip QEMU jobs only for test branches
* Make snap run tests also depend on the Certbot snap build
* Update .azure-pipelines/templates/jobs/packaging-jobs.yml
Co-authored-by: Brad Warren <bmw@users.noreply.github.com>
* Update .azure-pipelines/templates/stages/deploy-stage.yml
Co-authored-by: Brad Warren <bmw@users.noreply.github.com>
* More accurate way to get the plugin snap name
* Integrate DNS snap tests into certbot-ci
* Fixes
* Update certbot-ci/snap_integration_tests/conftest.py
Co-authored-by: Brad Warren <bmw@users.noreply.github.com>
* Update certbot-ci/snap_integration_tests/conftest.py
Co-authored-by: Brad Warren <bmw@users.noreply.github.com>
* Clean an _init_.py file
Co-authored-by: Brad Warren <bmw@users.noreply.github.com>
People who are considering running Certbot with Docker are probably doing so because their webserver is to be run with Docker. These changes to the README should help them to understand that doing so will require knowledge of Docker volumes and that the architectural justification for running Certbot in a separate container is the "one service per container" best practice.
Short PR to improve some things during snap builds:
* clean up snapcraft assets before a build, in order to avoid some weird errors when two builds are executed consecutively without cleanup
* use python3 explicitly in `tools/simple_http_server.py`, because on several recent distributions the `python` binary is no longer exposed, only `python2` or `python3`.
If you go to a URL like https://snapcraft.io/certbot/releases and try to move the Certbot snap into the candidate or stable channels, you cannot do so. There is a tooltip which says that revisions with the grade devel cannot be promoted to candidate or stable channels.
The documentation for `grade` can be found at https://snapcraft.io/docs/snapcraft-yaml-reference where it says the value is optional and
> Defines the quality grade of the snap.
> Type: enum
> Can be either devel (i.e. a development version of the snap, so not to be published to the stable or candidate channels) or stable (i.e. a stable release or release candidate, which can be released to all channels)
> Example: [stable or devel]
I'm working on a proposal for our next steps for snaps which involves moving the Certbot snap to the stable channel. I of course won't make those changes without giving others a chance to share their opinion, but I'd like to avoid the situation where we're technically unable to move the Certbot 1.6.0 snap to the stable channel despite wanting to do so.
I started to make the same changes to the DNS plugins, but I personally think it's too soon to propose stable versions of those yet and `grade` is a simple way to ensure we don't accidentally promote something there.
You can see the snap being built and run successfully with this change at https://dev.azure.com/certbot/certbot/_build/results?buildId=2246&view=results.
Fixes #8071 and fixes https://github.com/certbot/certbot/issues/8110.
This PR migrates every job from Travis to Azure Pipelines.
This PR essentially converts the Travis jobs into Azure Pipelines with identical functionality (or I made a mistake). The jobs are added to the relevant existing pipelines (`main`, `nightly`, `advanced-test`, `release`). A global refactoring using the templating system greatly reduces the verbosity of the pipeline descriptions.
A specific feature (not present in Travis) is added: the `On_Failure` stage. Using the Mattermost API directly, it notifies a Mattermost channel of pipeline failures with a link to the failed pipelines, without requiring authentication to Microsoft.
See https://github.com/certbot/certbot/pull/8098#issuecomment-649873641 for the post merge actions to do at the end of this work.
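For example, this kind of notification can be as simple as posting to a Mattermost incoming webhook (a sketch with placeholder URLs; not necessarily the exact mechanism used in the pipeline):
```
import json
import urllib.request

def notify_failure(webhook_url, pipeline_url):
    """Post a pipeline-failure notice to a Mattermost incoming webhook."""
    payload = {"text": "Pipeline failed: {}".format(pipeline_url)}
    request = urllib.request.Request(
        webhook_url,
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json"},
    )
    urllib.request.urlopen(request)

# notify_failure("https://mattermost.example.org/hooks/<token>", "<link to failed run>")
```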
Fixes #7420.
* Set up CentOS 8 test farm tests
* Don't add to apache2_targets until 7273 is resolved
* Start upgrade test from a version that works on centos 8
* remove when possible from targets
* acme: add support for alternative cert. chains
* certbot: add --preferred-chain
* remove support for issuer SKI matching
* show --preferred-chain in "run" help
* warn if no chain matched and it's not a dry-run
* fix existing failing tests
* add unit, integration tests
* bump acme dependency to dev version
* simplify test to avoid py2.7 recursion bug
* add preferred_chain to STR_CONFIG_ITEMS
* reduce preferred_chain warning to info level
* acme: fix some docstrings in .messages
* certbot: fix docstring in crypto_util
* try to fix certbot-nginx acme dep problem
Fixes #8093.
This PR modifies and audits all uses of `subprocess` and `Popen` outside of tests, `certbot-ci/`, `certbot-compatibility-test/`, `letsencrypt-auto-source/`, `tools/`, and `windows-installer/`. Calls to outside programs have their `env` modified to remove the `SNAP` components of paths, if they exist. This includes any calls made from hooks, calls to `apachectl` and `nginx`, and to `openssl` from `ocsp.py`.
For testing manually, rsync flags will look something like:
```
rsync -avzhe ssh root@focal.domain:/home/certbot/certbot/certbot_*_amd64.snap .
rsync -avzhe ssh certbot_*_amd64.snap root@centos7.domain:/root/certbot/
```
With these modifications, `certbot plugins --prepare` now passes on CentOS 7.
If I'm wrong and we package the `openssl` binary, the modifications should be removed from `ocsp.py`, and `env` should be passed into `run_script` rather than set internally in its calls from nginx and apache.
One caveat with this approach is the disconnect between why it's a problem (packaging) and where it's solved (internal to Certbot). I considered a wrapping approach, but we'd still have to audit specific calls. I think the best way to address this is robust testing; specifically, running the snap on other systems.
For hooks, all calls will remove the snap paths if they exist. This is probably fine, because even if the hook intends to call back into certbot, it can do that, it'll just create a new snap.
I'm not sure if we need these modifications for the Mac OS X/Darwin calls, but they can't hurt.
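The general idea behind the environment modification looks roughly like this (illustrative; see the commits below for where `certbot.util.env_no_snap_for_external_calls` actually lives and what extra checks it does):
```
import os

def env_without_snap_paths():
    """Return a copy of the environment with $SNAP-rooted entries removed from
    path-like variables, for use when calling programs that live on the host."""
    env = os.environ.copy()
    snap_root = env.get("SNAP")
    if not snap_root:
        # Not running inside a snap: nothing to strip.
        return env
    for var in ("PATH", "LD_LIBRARY_PATH"):
        if var in env:
            env[var] = ":".join(
                part for part in env[var].split(":")
                if not part.startswith(snap_root)
            )
    return env

# subprocess.run(["apachectl", "configtest"], env=env_without_snap_paths())
```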
* Add method to plugins util to get env without snap paths
* Use modified environment in Nginx plugin
* Pass through env to certbot.util.run_script
* Use modified environment in Apache plugin
* move env_no_snap_for_external_calls to certbot.util
* Set env internally to run_script, since we use that only to call out
* Add env to mac subprocess calls in certbot.util
* Add env to openssl call in ocsp.py
* Add env for hooks calls in certbot.compat.misc.
* Pass env into execute_command to avoid circular dependency
* Update hook test to assert called with env
* Fix mypy type hint to account for new param
* Change signature to include Optional
* go back to using CERTBOT_PLUGIN_PATH
* no need to modify PYTHONPATH in env
* robustly detect when we're in a snap
* Improve env util fxn docstring
* Update changelog
* Add unit tests for env_no_snap_for_external_calls
* Import compat.os
Fixes #8103.
* Update the DNS plugin generator script to core20 syntax
* Generate new snapcraft.yamls for the DNS plugins
* Update certbot.wrapper to search for python3.8 paths
Fixes #7957
This PR makes snapcraft generate the dependency constraints file itself during the snap build, instead of having an external script do it before calling snapcraft.
This line seems to refer to [augeas.py](https://github.com/hercules-team/python-augeas/blob/v0.5.0/augeas.py) from the version of Augeas we normally have pinned. This was necessary when we were installing each Certbot component in separate parts and only combining them later to ensure that the Augeas fork (which uses cffi) was used instead of the unmodified pinned version of Augeas.
Since everything is installed in one part and we're removing the Augeas pinning now though, this line is no longer necessary. You can see the snap being built and tested successfully with this change at https://travis-ci.com/github/certbot/certbot/builds/172134518.
* Do not unstage generated cffi augeas
* Add comment about deleted line.
According to `distutils/version.py`, StrictVersion is pretty strict about
which version numbers it accepts:
> A version number consists of two or three dot-separated numeric
> components, with an optional "pre-release" tag on the end. The
> pre-release tag consists of the letter 'a' or 'b' followed by a number.
This assumption already fails for some pretty basic Python libraries,
like setuptools, which is also available as `46.1.3.post20200610`, a
completely valid version number according to
https://www.python.org/dev/peps/pep-0440/#post-releases.
There doesn't seem to be a particular reason why StrictVersion was
used here, so let's use LooseVersion to be compatible with these
versions.
Co-authored-by: Adrien Ferrand <adferrand@users.noreply.github.com>
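For example, with the post-release version mentioned above:
```
from distutils.version import LooseVersion, StrictVersion

LooseVersion("46.1.3.post20200610")   # parses fine
StrictVersion("46.1.3.post20200610")  # raises ValueError: invalid version number
```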
* Base logic
* Various controls when email is None
* Adapt eff tests
* Forward compatibility
* Also for csr
* Explicit regr or meta updates in account objects
* Adapt logic to ask for eff subscription during registering
* Adapt tests
* Move dry-run control
* Add some relevant controls on handle_subscription call checks
This will allow DNS plugin snaps to build when they rely on an unreleased acme/certbot, and removes the extra copy of Certbot from externally snapped plugins. Fixes #8064 and fixes #7946. The implementation is based on the design [here](https://github.com/certbot/certbot/issues/8064#issuecomment-645513120). (A `setup.py` sketch of the idea follows the testing notes below.)
To test, see reverted commit 8632064. Steps taken:
- added changes to setup.py and snapcraft.yaml
- successfully snapped, connected, ran `sudo certbot plugins --prepare`
- added temporary changes to have both certbot and certbot-dns-dnsimple use DNSAuthenticator2
- snapped and installed certbot, `certbot plugins` failed as expected.
- snapped and installed certbot-dns-dnsimple, `sudo certbot plugins --prepare` succeeded
- Inspected dns plugin's `bin` and `lib`; no `certbot` or `acme`, as expected.
```
$ ls /snap/certbot-dns-dnsimple/current/lib/python3.6/site-packages/
OpenSSL future-0.18.2.dist-info requests_file.py
PyYAML-5.3.1.dist-info idna setuptools
_cffi_backend.cpython-36m-x86_64-linux-gnu.so idna-2.9.dist-info setuptools-47.3.1.dist-info
certbot_dns_dnsimple lexicon six-1.15.0.dist-info
certbot_dns_dnsimple-1.6.0.dev0-py3.6.egg-info libfuturize six.py
certifi libpasteurize tldextract
certifi-2020.4.5.1.dist-info past tldextract-2.2.2.dist-info
cffi pip urllib3
cffi-1.14.0.dist-info pip-20.1.1.dist-info urllib3-1.25.9.dist-info
chardet pkg_resources wheel
chardet-3.0.4.dist-info pyOpenSSL-19.1.0.dist-info wheel-0.34.2.dist-info
cryptography pycparser yaml
cryptography-2.8.dist-info pycparser-2.20.dist-info zope
dns_lexicon-3.3.26.dist-info requests zope.interface-5.1.0-py3.6-nspkg.pth
easy_install.py requests-2.23.0.dist-info zope.interface-5.1.0.dist-info
future requests_file-1.5.1.dist-info
$ ls /snap/certbot-dns-dnsimple/current/bin/
chardetect futurize lexicon pasteurize tldextract
```
- reset to HEAD^
- snapped and installed certbot to not have the DNSAuthenticator2 changes, `certbot plugins` failed as expected.
* Don't include certbot deps when EXCLUDE_CERTBOT_DEPS is set
* Set EXCLUDE_CERTBOT_DEPS in certbot-dns-dnsimple/snap/snapcraft.yaml
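As mentioned above, a sketch of how a plugin's `setup.py` can honor `EXCLUDE_CERTBOT_DEPS` (package names and version pins here are illustrative, not the exact requirements):
```
import os
from setuptools import setup

install_requires = [
    # Plugin-specific dependencies stay unconditional (illustrative entry).
    "dns-lexicon>=3.2.1",
]
if not os.environ.get("EXCLUDE_CERTBOT_DEPS"):
    # In the snap build, certbot and acme come from the Certbot snap via the
    # content interface, so the build sets EXCLUDE_CERTBOT_DEPS to skip them.
    install_requires.extend(["acme>=1.6.0", "certbot>=1.6.0"])

setup(
    name="certbot-dns-dnsimple",
    install_requires=install_requires,
    # ... other arguments omitted ...
)
```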
fix branch name
grammar
remove readme link
Remove links to Robie's repo.
say you can't use docker
Commands not command
Update publishing permissions section.
Have Certbot trust plugins.
Do not run snapcraft with sudo.
Upgrade snap to be based on core20
This PR makes several changes to be built on top of the core20 base snap. Fixes #7854.
The main changes are to `snapcraft.yaml`. With the Snapcraft 4.0/core20 base, the python plugin is a thin wrapper, basically creating a `venv` and installing the packages from the source. The trouble with this is that it runs pycache compilation, creating caches that conflict between the different parts. To solve that, we put everything in a single part. Other changes include:
- We use classic confinement, so we need to specify a bunch of python packages in `stage-packages`, as mentioned [here](https://forum.snapcraft.io/t/trouble-bundling-python-with-classic-confinement-in-core20-4-0-4/18234/2).
- The certbot executable now lives in `bin`, so we specify running `bin/certbot`.
- Since `python-augeas` is now being pulled into the single part, remove the pinning from constraints so we can use the latest version directly from github.
- Precompile our `cryptography` and `cffi` wheels to be based on python3.8.
Separately, we had to upgrade the snapcraft docker image to be based on focal, due to the thin wrapper situation. This was accomplished [here](https://github.com/adferrand/snapcraft/pull/1).
* Get rid of a whole bunch of error message
* Remove some more overlaps
* don't use certbot from nginx and apache
* use python3 from bin
* certbot needs to be in bin
* try to exclude just the certbot folder
* try a couple things to use the python from the venv bin
* play around with which versions of things we want from each package
* ok, certbot-nginx does need to stage bin
* certbot needs to not stage bin. why does certbot not put certbot in bin?
* fail to inspect more versions of things in the container shell
* take cffi backend from python-augeas
* if we use certbot from bin things should work?
* why is bin not in path? no idea, but let's get it compiled then inspect things in the snap shell
* use snap.certbot instead of bin/certbot
* it does require bin/certbot. I don't know why.
* let's see if we can stick it all in one step
* try installing local subdirectories
* move python-augeas into the single part
* remove after
* put back python-augeas part for now; ERROR: Could not satisfy constraints for 'python-augeas': installation from path or url cannot be constrained to a version
* how was this previously working without git installed? install git.
* maybe it needs to already have python3-wheel installed
* maybe wheel will install first if I change it to -e
* no -e
* maybe try a different python3 package to stage
* this last change wasn't necessary
* remove the bin/ from renew
* nope, it does need bin/certbot
* back to wget
* stage a bare python3
* add all necessary python packages to stage-packages
* pretty sure we don't actually need wheel. let's try removing it!
* remove python-augeas, since we have it pinned to an older version in cb-auto that might work
* stage augeas
* still need libaugeas-dev
* ok let's try building
* combining into one part works! just make sure to unpin python-augeas when generating snap-constraints.txt
* change our scripts to unpin python-augeas
* Use ubuntu 20 in compile_native_wheels.sh
* .travis.yml should use python3-dev instead of python-dev
* jk! we don't need python3-dev in travis
* Update cffi and cryptography wheels for ubuntu20 version of python
* looks like we need python3-dev to build things
* Remove deprecated i386 wheels
I initially added this when the script was doing things like migrating all LXD containers to the snap. I think the external side effects are now pretty minimal though, so I think we can remove the need for this environment variable, which makes it easier to use outside of CI for manual testing.
* Tweaks for improved Cloudflare API
* Update docs for dns-cloudflare
* Update tests and changelog
* Fix bad merge
* Fix error code for record add
* Improve error message
Co-authored-by: Brad Warren <bmw@users.noreply.github.com>
This PR proposes a way to build the certbot snap for several architectures, using a QEMU-based emulation approach, and several optimizations to keep the build time of each snap below 20 minutes.
Most of the reasoning for the approach proposed here is described in the original PR: https://github.com/basak/certbot-snap-build/pull/27
On top of that, I added a docker pull of a pre-compiled snapcraft Docker image, instead of compiling it during the Travis pipeline, in order to save another 5 to 7 minutes on each snap build. The images are compiled and stored here: https://hub.docker.com/repository/docker/adferrand/snapcraft. Depending on when the PR is reviewed, we can:
* continue to use `adferrand/snapcraft`
* move its logic to certbot scope and use something like `certbot/snapcraft`
* wait for https://github.com/snapcore/snapcraft/pull/3144 to be merged, and use `snapcore/snapcraft`.
* Backport https://github.com/basak/certbot-snap-build/pull/27 into Certbot project
* Fix build deps
* Integrate proactively #8012 to fix builds on non-amd64 archs
* Configure jobs on Travis
* Focus on snap builds. Disable temporarily some jobs. Disable deploy actions by security.
* Specify TARGET_ARCH during snap build
* Do not do anything if TOXENV is not set
* Various optimizations
* Use a recent version of Ubuntu to get correct snap features out of the box
* Add up to date wheels
* Organizing scripts
* Set dest dir
* Get back original configuration for Travis
* Add comments
* Update common_libs.sh
* Use adferrand/snapcraft
* Test build
* Stable snapcraft
* Update build_and_install.sh
* Move back snap builds to the cron/release pipeline
* Update snap/local/compile_native_wheels.sh
Co-authored-by: Brad Warren <bmw@users.noreply.github.com>
* Update snap/local/compile_native_wheels.sh
Co-authored-by: Brad Warren <bmw@users.noreply.github.com>
* Update snap/local/compile_native_wheels.sh
Co-authored-by: Brad Warren <bmw@users.noreply.github.com>
* Update snap/local/build_and_install.sh
Co-authored-by: Brad Warren <bmw@users.noreply.github.com>
* Enable i386 builds, various optimizations
* Update dependencies
* Configure a simple http server to serve the pre compiled wheels
* Fix wheels compilation
* Relax permissions
Co-authored-by: Brad Warren <bmw@users.noreply.github.com>
* Add snap plugin support
Switch to a PoC branch of certbot that supports the new
CERTBOT_PLUGIN_PATH and wrap the snap to set this variable correctly
based on the content interfaces connected.
(cherry picked from commit 7076a55fd82116d068e2aca7239209b7203917d2)
* Modify certbot.wrapper to append to PYTHONPATH instead of separate CERTBOT_PLUGIN_PATH variable
* Update certbot-wrapper to python3.6 version
* add source field
* Update changelog
* Use bash instead of sh
Co-authored-by: Brad Warren <bmw@users.noreply.github.com>
* Exit if something goes wrong
Co-authored-by: Brad Warren <bmw@users.noreply.github.com>
* No leading : when modifying empty PYTHONPATH
* Improve bash handling of PYTHONPATH manipulation
Co-authored-by: Robie Basak <robie.basak@canonical.com>
Co-authored-by: Brad Warren <bmw@eff.org>
Co-authored-by: Brad Warren <bmw@users.noreply.github.com>
This PR stems from an observation I made on the current version of Certbot (1.3.0): the `renewal-hooks` directory in the Certbot configuration directory is created on Windows with write permissions for everybody.
I thought it was a critical bug, since this directory contains hooks that are executed by Certbot, and you certainly do not want this folder to be open to any malicious hook that could be inserted by anyone and then executed with administrator privileges by Certbot.
It turns out that for this specific problem the bug is not critical for the hooks, because the scripts are expected to be in subdirectories of `renewal-hooks` (namely `pre`, `post` and `deploy`), and these subdirectories have proper permissions because we set them explicitly when Certbot starts.
Still, there is a divergence here between Linux and Windows: on Linux all Certbot directories without explicit permissions have at maximum `0o755` permissions by default, while on Windows it is a `0o777` equivalent. It is not an immediate security risk, but it is definitely error-prone, unexpected, and so a potential breach in the future if we forget about it.
The root cause is that umask does not exist on Windows. Under Linux the umask defines the default permissions when you create a file or a directory. Python takes that into account: the API for `os.open` and `os.mkdir` exposes a `mode` parameter with a default value of `0o777`. In practice the effective mode is never `0o777` (whether you set the `mode` explicitly or leave the default) because it is masked by the current umask value in the system: on Linux the default umask is `0o022`, so files/directories get a maximum mode of `0o755` if you did not set the umask explicitly, and that is what is observed for Certbot.
However on Windows, the `mode` value passed (or defaulted) to the `open` and `mkdir` functions of the `certbot.compat.filesystem` module is taken verbatim, since umask does not exist, and is then used to calculate the DACL of the newly created file/directory. So if the mode is not set explicitly, we end up with files and directories with `0o777` permissions.
This PR fixes this problem by implementing a umask behavior in the `certbot.compat.filesystem` module, which will be applied to any file or directory created by Certbot, since we forbid using the `os` module directly.
The implementation is quite straightforward. For Linux the behavior is unchanged. On Windows a `mask` parameter is added to the function that calculates the DACL, to be invoked appropriately when a file or directory is created. The actual value of the mask is taken from an internal class of the `filesystem` module: its default value is `0o755` to match the default umask on Linux, and it can be changed with the new `umask` method, which has the same behavior as the original `os.umask`. Of course `os.umask` becomes a forbidden function and `filesystem.umask` must be used instead.
Existing code that is impacted has been updated, and new unit tests were created for this new function.
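A minimal sketch of the idea, with hypothetical names and `os.umask`-style semantics; the real `certbot.compat.filesystem` code computes a Windows DACL rather than a POSIX mode, so `os.chmod` here is only a stand-in for that step:
```
import os
import sys

# Hypothetical module-level state standing in for the internal class
# described above; bits set here are removed from requested modes.
_UMASK = 0o022


def umask(mask):
    """Set the emulated umask and return the previous value, like os.umask."""
    global _UMASK
    old, _UMASK = _UMASK, mask
    return old


def mkdir(file_path, mode=0o777):
    """Create a directory whose effective mode honors the emulated umask."""
    if sys.platform != 'win32':
        os.mkdir(file_path, mode)  # the kernel applies the real umask
        return
    effective_mode = mode & ~_UMASK
    os.mkdir(file_path)
    os.chmod(file_path, effective_mode)  # placeholder for the DACL calculation
```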
* Implement umask for Windows
* Set umask at the beginning of tests
* Fix lint, update local oldest requirements
* Update certbot-apache/setup.py
Co-authored-by: Brad Warren <bmw@users.noreply.github.com>
* Improve tests
* Adapt filesystem.makedirs for Windows
* Fix
* Update certbot-apache/setup.py
Co-authored-by: Brad Warren <bmw@users.noreply.github.com>
* Changelog entries
* Fix lint
* Update certbot/CHANGELOG.md
Co-authored-by: Brad Warren <bmw@users.noreply.github.com>
Co-authored-by: Brad Warren <bmw@users.noreply.github.com>
Issue #1123 discusses a feature that allows users to set the cipher
security level. That feature wasn't built. It didn't provide enough
user value to justify the corresponding increase in complexity. The
feature request and the associated discussion threads were closed.
However, the proposed API spec and the TODO section remained in the
cipher docs. They're a vestige of that issue from olden days and this PR
removes those last living traces...
Fixes#8027.
* Add support for NetBSD by telling certbot-nginx where the nginx
configuration directory is.
* Update the CHANGELOG.
* Pass the right type of sequence to "in". Thanks lint.
* Adjust the CHANGELOG.md entry following feedback from ohemorange.
Co-authored-by: Lloyd Parkes <lloyd@must-have-coffee.gen.nz>
In #7771, the Apache configurator gained the ability to identify what
version of OpenSSL Apache's ssl_module is linked against. However, the
detection was only functional if the module was built as a DSO (which is
almost always the case).
This commit covers the case where the ssl_module is statically linked
within the Apache binary. It requires the user to specify the path to
the binary (with --apache-bin) and emits a warning if static linking is
detected but no path has been provided.
This PR upgrades Certbot's pinned dependencies through `letsencrypt-auto-source/rebuild_dependencies.py` while taking into account the problems detected in https://github.com/certbot/certbot/pull/8035:
* `cryptography` is pinned to `2.8` to continue to support OpenSSL 1.0.1 on non-x86 ancient Linux distributions (RHEL 6 + Debian 8)
* `parsedatetime` is pinned to `2.5` because of an incompatibility with Python 2.7 (see https://github.com/bear/parsedatetime/issues/246)
* `letsencrypt-auto-source/rebuild_dependencies.py` now takes into account the environment markers that are added to `AUTHORITATIVE_CONSTRAINTS`: this is used for the `enum34` dependency, to avoid installing it on Python 3.6+ and breaking the distribution by swapping out the built-in `enum` module during the setup of the Certbot venv.
Fixes#8030
* Pin cryptography and parsedatetime
* Upgrade dependencies
* Remove authoritative constraint
* Upgrade dependencies
* Rebuild certbot-auto
* Update letsencrypt-auto-source/rebuild_dependencies.py
Co-authored-by: Brad Warren <bmw@users.noreply.github.com>
* Honor specific requirements in the AUTHORITATIVE_CONSTRAINTS
* Fix injection
* Update dependencies
* Update rebuild_dependencies.py
Co-authored-by: Brad Warren <bmw@users.noreply.github.com>
Fixes#7988. As described there, the steps involved are:
1. Update our tests so they fail due to this problem.
2. Update the keys used in the tests so they pass with the new changes.
For 1, see a [failing travis run](https://travis-ci.com/github/certbot/certbot/jobs/340710511) with the included change. And for the full output to confirm that this is what is failing, see a [run on debian 10](https://github.com/certbot/certbot/files/4692350/debian_run_log.txt).
This PR adds `rsa4096_key.pem` and `rsa4096_cert.pem`, updates the `TLS-ALPN` test to use those keys in place of the 1024-bit versions, and fixes the README in that `testdata` folder with correct instructions to generate these files.
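For reference, one way such test fixtures could be regenerated with the `cryptography` library (the `testdata` README may use different commands; the subject name below is illustrative):
```
import datetime

from cryptography import x509
from cryptography.hazmat.primitives import hashes, serialization
from cryptography.hazmat.primitives.asymmetric import rsa
from cryptography.x509.oid import NameOID

# Generate a 4096-bit RSA key (recent cryptography; older versions need a backend arg).
key = rsa.generate_private_key(public_exponent=65537, key_size=4096)
with open("rsa4096_key.pem", "wb") as f:
    f.write(key.private_bytes(
        serialization.Encoding.PEM,
        serialization.PrivateFormat.TraditionalOpenSSL,
        serialization.NoEncryption(),
    ))

# Self-sign a certificate to pair with the key for the TLS-ALPN test.
name = x509.Name([x509.NameAttribute(NameOID.COMMON_NAME, u"example.com")])
cert = (
    x509.CertificateBuilder()
    .subject_name(name)
    .issuer_name(name)
    .public_key(key.public_key())
    .serial_number(x509.random_serial_number())
    .not_valid_before(datetime.datetime.utcnow())
    .not_valid_after(datetime.datetime.utcnow() + datetime.timedelta(days=3650))
    .sign(key, hashes.SHA256())
)
with open("rsa4096_cert.pem", "wb") as f:
    f.write(cert.public_bytes(serialization.Encoding.PEM))
```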
* export PIP_NO_BINARY in pip install subshell in test_sdists.sh
* set environment variable on the line that installs most packages
* Generate 4096-bit rsa key and cert, and fix README instructions to do so.
* Update TLS_ALPN test to use 4096-bit key instead of 1024-bit key.
* Update changelog
* Older versions of Python have an error when both VIRTUAL_NO_DOWNLOAD and PIP_NO_BINARY are set, so only apply the latter at the install phase.
* Add enum34 constraint manually, since rebuild_dependencies.py seems to be broken.
* only delete key if it exists
* Check OpenSSL version before trying to set PIP_NO_BINARY
* Add comment explaining why we only set PIP_NO_BINARY at the install step
* Add the content interface to Certbot
This commit contains a subset of the changes from 7076a55fd82116d068e2aca7239209b7203917d2.
* Normalise slot parameters
(cherry picked from commit 810941979bcf609c1e0be18e9263abf046b90e82)
Co-authored-by: Robie Basak <robie.basak@canonical.com>
Fixes#7667.
Implements the plan described in #7667.
Here's a terminal log showing that it does so:
```
# sudo snap connect certbot:plugin certbot-dns-dnsimple
error: cannot perform the following tasks:
- Run hook prepare-plug-plugin of snap "certbot" (run hook "prepare-plug-plugin":
-----
Only connect this interface if you trust the plugin author to have root on the system
Run `snap set certbot trust-plugin-with-root=ok` to acknowledge this and then run this command again to perform the connection
-----)
# snap set certbot trust-plugin-with-root=ok
# sudo snap connect certbot:plugin certbot-dns-dnsimple
# sudo snap disconnect certbot:plugin certbot-dns-dnsimple:certbot
# sudo snap connect certbot:plugin certbot-dns-dnsimple
error: cannot perform the following tasks:
- Run hook prepare-plug-plugin of snap "certbot" (run hook "prepare-plug-plugin":
-----
Only connect this interface if you trust the plugin author to have root on the system
Run `snap set certbot trust-plugin-with-root=ok` to acknowledge this and then run this command again to perform the connection
-----)
```
* Add plugin connection hook to accept root trust
* snapctl requires a configure hook to set options
* Add sh notice
* Update changelog
Fixes#7993
This PR uses `os.umask()` during the `certbot.compat.filesystem.makedirs()` call to ensure that all directories, and not only the leaf one, have the provided `mode` when created. This ensures safe and consistent behavior independently of the Python version, since the behavior of `os.makedirs` changed on that matter in Python 3.7.
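A minimal sketch of the trick on the POSIX side, assuming a hypothetical helper name:
```
import os


def makedirs_with_mode(path, mode=0o755):
    """Create `path` and any missing parents so every created directory gets `mode`."""
    # Force the umask so the kernel cannot strip bits from intermediate
    # directories, then restore it no matter what happens.
    old_umask = os.umask(0o777 & ~mode)
    try:
        os.makedirs(path, mode)
    finally:
        os.umask(old_umask)
```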
* Implement logic to apply the same permission on all dirs created by makedirs
* Add a test
* Add comment
* Update certbot/certbot/compat/filesystem.py
Co-authored-by: Brad Warren <bmw@users.noreply.github.com>
* Error out in apache installer when mod_ssl is not available
* Update to MisconfigurationError and add/fix tests
* Remove error cases we no longer hit and associated test
* mock out function to have consistent error across machines
* improve changelog message
* only check key in modules list, not value
The error message from `python3 -m venv` when you don't have `python3-venv` installed is pretty good, but let's skip the failure and make sure it is installed the first time.
Related to #7649 since @joohoi needs a method to copy owner and permissions together from a source file to a destination file.
This PR creates the method `copy_ownership_and_mode()` in the `certbot.compat.filesystem` module to achieve this goal. Its behavior is consistent across Linux and Windows with respect to the security model that has been defined for Certbot on Windows.
The method behaves broadly the same as `copy_ownership_and_apply_mode`, but this time the permissions are extracted from the source file. For Windows this means that the DACL is copied from the source to the destination with the same content.
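A minimal sketch of the POSIX side of such a helper (the Windows implementation copies the DACL instead; the body here is illustrative, not Certbot's actual code):
```
import os


def copy_ownership_and_mode(src, dst):
    """Copy owner, group, and permission bits from src to dst (POSIX sketch)."""
    stats = os.stat(src)
    os.chown(dst, stats.st_uid, stats.st_gid)  # same owner and group (needs privileges)
    os.chmod(dst, stats.st_mode & 0o7777)      # same permission bits
```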
* Create copy_ownership_and_mode to copy both owner and mode from src to dst
* Update certbot/tests/compat/filesystem_test.py
Co-authored-by: Brad Warren <bmw@users.noreply.github.com>
* Fix docstring
Co-authored-by: Brad Warren <bmw@users.noreply.github.com>
Setuptools broke the installation setup by removing a deprecated API that is still used by some of our dependencies (see pypa/setuptools#2017).
This PR fixes the Docker build by using pipstrap to pin pip/setuptools/wheel, as is done in several other critical places (certbot-auto, ...).
An issue has been opened in certbot to fix the problem more generally with the most recent versions of setuptools: certbot/certbot#7976
I rebuilt all Docker images (certbot + DNS plugins) locally for the three architectures, and all of them passed.
`tox -e cover` fails for me on macOS. This is due to the differences in the code that is run when on Linux vs. other platforms in `certbot.util` and its tests. Diffing my local coverage with Travis, the only difference in lines missing test coverage is:
```
$ diff travis.txt local.txt
< certbot/certbot/util.py 238 24 90% 132-134, 210, 275-281, 318, 330-340, 347, 381-382
> certbot/certbot/util.py 239 34 86% 30, 132-134, 210, 275-281, 301, 305, 316-318, 330-340, 347, 367-373, 381-382
< certbot/tests/util_test.py 392 0 100%
> certbot/tests/util_test.py 375 26 93% 483-487, 492-502, 507-514, 548-550, 555-557
```
I think tests on `master` should not be failing locally for people.
While there would be other ways to fix this by adding `# pragma: no cover` lines or writing mocked out tests for other platforms, I personally just think dropping the minimum coverage one percentage point is fine at least for now.
Coming out of the conversation at #7863 in the linked Google Doc, we should always have at least 1 release between updating one of our plugins to stop using a deprecated acme/certbot API and removing it from acme/certbot. Doing this gives the plugin changes time to propagate rather than potentially having the plugin break because Certbot was updated before the plugin had made the necessary changes.
This comment here should help ensure this.
* Add pytest warnings warning.
* clarify comment
Fixes#7713.
As discussed in #7713, providing a PowerShell script as a hook for Certbot does not currently work. This is because hooks are run in a `cmd` environment, which recognizes only `.bat` files as valid scripts that can be run by their bare name on the command line.
PowerShell, on the other hand, recognizes both `.bat` and `.ps1` files as valid scripts.
This PR makes hook commands be executed by PowerShell instead of `cmd`, which `Popen` uses by default when `shell=True` is set. It also modifies the tests to handle this new environment, in particular in terms of encoding (UTF-16-LE is the default in Powershell).
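A minimal sketch of the idea (hypothetical helper; the exact PowerShell invocation and flags used by Certbot may differ):
```
import subprocess
import sys


def execute_hook(shell_cmd):
    """Run a hook command through PowerShell on Windows, the default shell elsewhere."""
    if sys.platform == 'win32':
        # PowerShell can run both .bat and .ps1 scripts by their bare name,
        # unlike cmd.exe, which Popen(shell=True) would use.
        proc = subprocess.Popen(['powershell.exe', '-Command', shell_cmd],
                                stdout=subprocess.PIPE, stderr=subprocess.PIPE)
    else:
        proc = subprocess.Popen(shell_cmd, shell=True,
                                stdout=subprocess.PIPE, stderr=subprocess.PIPE)
    out, err = proc.communicate()
    return proc.returncode, out, err
```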
* Run hooks in powershell on Windows
* Fix hook test
* Fallback to unittest.mock
* In fact, shell_cmd as a list of str could not work. Declare only str as acceptable input for shell_cmd.
* Added changelog
Despite this PR (only) being ~200 lines containing mostly code copied from another repo, there is a lot going on here. For the sake of making it both easier to review and to remember some of these things in the future by referring back to this PR, I've documented a lot of noteworthy things with section headers below. With that said, it's probably not necessary to read each section unless you're interested in that topic.
The most noteworthy thing for the reviewer is **this PR should be merged and not squashed** to preserve authorship. To merge this code, once we're happy with this PR, I'll probably open a new PR squashing any commits I make in response to review comments back into a single commit to try to keep history somewhat clean. To help prevent this PR from being accidentally squashed, I'm making this a draft PR for now.
### Git history of https://github.com/basak/certbot-snap-build
I think it is worth preserving the git history of https://github.com/basak/certbot-snap-build that this PR is based on in this repo to help us track why things were done a certain way. To do this while keeping our git history somewhat clean, I took the approach described at https://stackoverflow.com/questions/1425892/how-do-you-merge-two-git-repositories/21495718#21495718 to move all history of https://github.com/basak/certbot-snap-build into a `snap` directory. I then squashed all commits so that sequential commits from the same author are one commit. I probably could have reordered commits to try and squash things a little more, but I personally don't think it's worth the trouble. Finally, I merged this rewritten history into this branch of the Certbot repo.
The contents of the `snap` directory are identical to the current contents of https://github.com/basak/certbot-snap-build before my final commit in this PR which makes the changes to make things work in this repo.
### Travis stages
This is described in general at https://docs.travis-ci.com/user/build-stages/, but I don't think we should deploy the snap if any of our tests are failing. To accomplish this, I created a "Snap" stage that builds, tests, and deploys the snap which is only executed after a "Test" stage that contains all of our other tests. The "Snap" stage will not run until the "Test" stage completes successfully.
### snap/local
This directory is ignored by `snapcraft` which I think makes it a good place to store `snap` specific scripts like `build_and_install.sh`.
See https://bugs.launchpad.net/snapcraft/+bug/1792203 for more info.
### Why remove certbot-compatibility-test from apacheconftest toxenvs?
Because it's not used. In theory, it could go in its own PR, but it'll create merge conflicts with this one so I'd personally prefer to include this simple change in this PR as well.
### Checklist for landing this PR
- [x] Squash all of my commits into one commit
- [x] Update the release instructions to include moving the snap to the beta channel
- [x] Shut down Robie's nightly builds probably by updating his repo to say that the code has moved here and deleting everything
* merge .gitignore
* Move snapcraft.yml up one level.
* update source
* move test.sh to tox.ini
* use new tox.ini in .travis.yml
* move snap build code
* make script executable
* remove unused python3-dev
* don't use deprecated classic flag
* go back to stable channel
* add nginx in snap addons
* add deploy steps
* Add comments explaining external tox envs.
* error if not in CI
* don't use --depth
* remove old .travis.yml
* Add big comment about SNAP_TOKEN.
* Set all_branches: true.
* Add repo setting.
* run travis on tags
* Add more documenting comments to .travis.yml.
* snap: move snapcraft.yaml to snap directory
Signed-off-by: Sergio Schvezov <sergio.schvezov@canonical.com>
* snap: use a local plugin to get around the delivered plugin
Add a plugin to the project which behaves as expected until a version
of snapcraft satisfies the project needs.
Additional snapcraft.yaml changes were made to accommodate for the snap
to build.
Signed-off-by: Sergio Schvezov <sergio.schvezov@canonical.com>
* snap: compile pycache in the last step for the last part
Signed-off-by: Sergio Schvezov <sergio.schvezov@canonical.com>
* Remove legacy Store upload credentials
These have not been needed since 5d7969a.
* Work around dev part dependency failure case
Get pip to install certbot from its VCS repository directly during the
build of the nginx and apache plugin parts.
This works around issue #12 when it affects the interaction between the
apache or nginx plugin and certbot itself.
It does not work around the case where the same problem occurs in the
interaction between certbot and acme. This looks harder to work around
because pip's VCS URL handling doesn't appear to include a facility to
install from a subdirectory of a git repository and this is where the
acme source is located.
* Switch to using lxd for the snapcraft run
The Docker image is 16.04 only. Before we can switch to 18.04, we need
to remove Docker, which means using the snapcraft snap using Travis' new
snap support, and then using the lxd functionality in snapcraft so that
it is enabled to build in the appropriate environment.
* Switch build to use core 18
Fixes: #14
* Move constraints into a list
This seems to be a requirement of either newer snapcraft, snapcraft
in lxd, or the move to core18 (it isn't clear to me which). This fixes
the error:
Failed to load plugin: properties failed to load for certbot: The 'constraints' property does not match the required schema: '$SNAPCRAFT_PART_SRC/constraints.txt' is not of type 'array'
* version-script -> snapcraftctl set-version
The use of version-script seems to break with either newer snapcraft,
snapcraft with lxd or core18 (it's not clear to me which). The breakage
is related to "parts/certbot/src" not being found. This can be fixed
with $SNAPCRAFT_PART_SRC, but this doesn't seem to be defined at
"version-script time".
However https://snapcraft.io/docs/deprecation-notices/dn10 deprecates
the use of version-script, and if we convert to the recommended new way,
then we use override-pull instead and $SNAPCRAFT_PART_SRC is defined
there, so this conveniently fixes both problems at once.
* Do not explicitly install snapd
Since this is now handled by the Travis addon, we do not need to do it
explicitly.
* Add Travis notifications
* Adjust automatic snap deployment configuration
Travis now has a documented[1] "snap" provider and the previous
experimental mechanism seems to have stopped working, presumably because
it was deprecated in favour of this new mechanism.
[1] https://docs.travis-ci.com/user/deployment/snaps/
* Configure for python3
* Update tests
* Use appropriate virtualenv
* Install nginx for the integration tests
* Try use LD_LIBRARY_PATH to find augeas shared library in snap when python-augeas is invoked
* Update travis to use built-in setup capabilities
* Update .travis.yml
* Add acme build
* Update tests
* Try more recent dist
* Update command
* Clean tests
* Add back augeas
* Add env
* Revert to last working snapcraft config
* Add a gitignore
* Reintegrate acme. Declare augeas in certbot parts
* Use release version of certbot
* Try new approach
* Fix config
* Directly install version of python-augeas from pypi
* Restart from basic
* Clone the certbot repository only once. Use pinned versions of dependencies from certbot-auto.
* Try relatively to source
* Use snapcraft env variables
* Strip hashes
* Fix path
* Redefine path
* Continue to prepare the runtime
* Fix command line
* Update .travis.yml
* Add back certbot-apache
* Update snapcraft.yaml
* Build snap against the latest release of certbot
* Add renewal timer
* Install libaugeas0 in python-augeas part build
This part needs libaugeas0 to build.
* Bump to 0.26.1
* Always act directly on upstream master
I want to keep this always working, so move to master. We can
reintroduce upstream stable releases when we are ready for general use.
Closes: #5
That particular issue seems to no longer happen. Presumably something
changed in upstream git or in PyPI. If it happens again, hopefully I'll
have CI against upstream master up by then and I'll be able to pin it
down.
* Add empty Travis build
* Add Travis automatic snap edge publication
* Add integration test
This uses upstream's test suite from their source tree to check the
built snap to make sure it behaves as expected, before attempting upload
to the store.
* Point Augeas to its lens library
Augeas defaults to looking in /usr/share/augeas/lenses, which in a snap
isn't found at this path, but inside $SNAP. So set AUGEAS_LENS_LIB to
where the lenses can be found within the snap.
This fixes the Apache plugin that uses Augeas.
In #7925 we accidentally changed the logic here. Before it was:
type = cron OR (type = push AND branch NOT IN (apache-parser-v2, master))
now it's
type = cron OR (type = push AND branch = master)
We want to be able to run our full test suite on things like test-* branches. The reason we had been excluding master is that it already has the full test suite run on it nightly (through what Travis calls cron), and not running it on every commit helps keep us from waiting on CI, since our nightly tests spin up so many jobs.
This PR changes things back to the intended behavior.
(We could talk about changing the condition to just type = cron OR type = push if we want, but I'd rather do that in a separate PR once things like test- branches are fixed.)
Fixes#7268
I removed the reference to automatically selecting which ACME protocol we use, since at some point we'll want to rip out the non-spec-compliant ACMEv1 code.
Python 2 is going to get harder and harder to install locally so I don't think we should assume/require devs to have it installed.
This PR builds on #7905 so our developer guide only has people use Python 3.
Part of #7886.
This PR conditionally installs `mock` in `certbot-dns-*/setup.py` based on setuptools version and python version, when possible. It then updates the tests to use `unittest.mock` when `mock` isn't available.
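A minimal sketch of the pattern (the package name and the exact version gate are illustrative; as the commit list below notes, the real setup.py files instead error out when a newer Python is combined with setuptools too old to understand the marker):
```
# setup.py (sketch): only require the third-party mock backport where the
# standard library's unittest.mock is unavailable.
from pkg_resources import parse_version
from setuptools import __version__ as setuptools_version
from setuptools import setup

install_requires = ['acme', 'certbot']

if parse_version(setuptools_version) >= parse_version('36.2'):
    # Environment markers are supported: let pip decide at install time.
    install_requires.append("mock ; python_version < '3.3'")
else:
    # Old setuptools cannot evaluate the marker, so always depend on mock.
    install_requires.append('mock')

setup(name='certbot-dns-example', version='0', install_requires=install_requires)

# And in the test modules, the matching import pattern:
try:
    from unittest import mock
except ImportError:  # pragma: no cover
    import mock
```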
* Do not require mock in Python 3 in certbot-dns modules
* update changelog
* error when trying to build wheels with old setuptools
* add type: ignores
Part of #7886.
This PR conditionally installs mock in `apache/setup.py` based on setuptools version and python version, when possible. It then updates `apache` tests to use `unittest.mock` when `mock` isn't available.
* Conditionally install mock in apache
* error out on newer python and older setuptools
* error when trying to build wheels with old setuptools
* use unittest.mock when third-party mock isn't available in apache, with no cover and type ignore
This PR is exactly the same as #7895, but now we know a little bit more about what was going on with `mypy`.
Part of #7886.
This PR conditionally installs mock in `certbot/setup.py` based on setuptools version and python version, when possible. It then updates `certbot` tests to use `unittest.mock` when `mock` isn't available.
* Conditionally install mock in certbot
* use unittest.mock when third-party mock isn't available in certbot
* Add type:ignores because of https://github.com/python/mypy/issues/1153
* error out on newer python and older setuptools
* error when trying to build wheels with old setuptools
Part of #7886.
This PR conditionally installs mock in `acme/setup.py` based on setuptools version and python version, when possible. It then updates `acme` tests to use `unittest.mock` when `mock` isn't available.
Now with `type: ignore` as appropriate. Once the "future steps" of #7886 are finished, and mypy is on Python 3, the `pragma no cover`s and `type ignore`s will be gone.
* Conditionally install mock in acme
* error out on newer python and older setuptools
* error when trying to build wheels with old setuptools
* use unittest.mock when third-party mock isn't available in acme, with no cover and type ignore
This PR fixes the Travis failures that can be seen at https://travis-ci.com/certbot/certbot/builds/160258644. From running the tests locally, it looks like Ubuntu has started shutting down the 19.04 repos, which makes sense as this release has been EOL'd. See https://wiki.ubuntu.com/Releases.
I have the full suite including the test farm tests running at https://travis-ci.com/github/certbot/certbot/builds/160269969 with this change.
The issue of adding 19.10 to our test farm tests is tracked by #7851. I think that issue is important and it's in our current milestone, but I'd personally rather get our tests passing for now and try to expand them to run on other systems later.
* Revert "Do not require mock in Python 3 in certbot module (#7895)"
This reverts commit 77871ba71c.
* Revert "Do not require mock in Python 3 in acme module (#7894)"
This reverts commit cd0acf5dcc.
Part of #7886.
This PR conditionally installs mock in `certbot/setup.py` based on setuptools version and python version, when possible. It then updates `certbot` tests to use `unittest.mock` when `mock` isn't available.
* Conditionally install mock in certbot
* use unittest.mock when third-party mock isn't available in certbot
* Add type:ignores because of https://github.com/python/mypy/issues/1153
* error when trying to build wheels with old setuptools
Part of #7886.
This PR conditionally installs mock in acme/setup.py based on setuptools version and python version, when possible. It then updates acme tests to use unittest.mock when mock isn't available.
* Conditionally install mock in acme
* use unittest.mock when third-party mock isn't available in acme
* error when trying to build wheels with old setuptools
* Fix dangerous default argument
* Remove unused imports
* Remove unnecessary comprehension
* Use literal syntax to create data structure
* Use literal syntax instead of function calls to create data structure
Co-authored-by: deepsource-autofix[bot] <62050782+deepsource-autofix[bot]@users.noreply.github.com>
Fixes#7857.
* stop using urllib2 in test farm tests
* use six for urllib instead
* remove fabric lcd usage
* correct lcd removal
* remove fabric cd
* convert some remote calls to v2
* move more cxns to v2
* get run working with prefix
* get sudo commands working
* remove final fabric v1 references including local
* update requirements and README
* add new venv to gitignore
* update version used in travis
* remove deploy_script unused kwargs
* fix killboulder implementation so I can test creating a new boulder server
* hardcode the gopath due to broken env management in fabric2
* Update letstest readme
* move the comment about hardcoding the gopath
* catch BaseException instead of Exception
* work around fabric #2007
* use connections as context managers to ensure they're closed
* remove reference to virtualenv
Translate a proxy specified by an environment variable ("http_proxy"
or "HTTP_PROXY") into options recognized by "openssl ocsp". Support
is limited to HTTP proxies which don't require authentication.
Fixes#6150
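A rough sketch of one plausible translation, assuming the proxy is handed to `openssl ocsp` through its `-host`/`-path` options (the helper name is illustrative, not Certbot's actual code):
```
import os
from urllib.parse import urlparse


def ocsp_proxy_args(ocsp_url):
    """Build host/path arguments for `openssl ocsp`, honoring http_proxy."""
    proxy = os.environ.get('http_proxy') or os.environ.get('HTTP_PROXY')
    if not proxy:
        return ['-url', ocsp_url]
    if '://' not in proxy:
        proxy = 'http://' + proxy  # urlparse needs a scheme to split host/port
    parsed = urlparse(proxy)
    if parsed.username or parsed.password:
        raise ValueError('authenticated HTTP proxies are not supported')
    # Connect to the proxy and request the full responder URL as the path.
    return ['-host', '{}:{}'.format(parsed.hostname, parsed.port or 80),
            '-path', ocsp_url]
```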
Fixes#7875.
After [this comment](https://github.com/certbot/certbot/issues/7875#issuecomment-608145208) and evaluating the options, I opted to go with `stricttextualmsg`, as required by RFC 8555. Reasoning is that the ACME v1 code path (via OpenSSL) produces a `fullchain_pem` which satisfies `stricttextualmsg`, so we don't need to be more generous than that.
One downside of the `re` approach is that it doesn't seem capable of capturing repeating group matches. As a result, it matches each certificate individually, silently passing over any data in between the encapsulation boundaries, such as explanatory text, which is prohibited by RFC 8555.
It would be ideal to raise an error when encountering such a non-conformant chain, but we'd need to create a mini-parser to do it, I think.
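A minimal sketch of the two-pass idea, using the `cryptography` library for the normalization pass (not the exact code in acme):
```
import re

from cryptography import x509
from cryptography.hazmat.primitives.serialization import Encoding

# Pass 1: be generous about what sits between the encapsulation boundaries
# (CRLF line endings, stray whitespace, etc.).
CERT_RE = re.compile(
    b'-----BEGIN CERTIFICATE-----.*?-----END CERTIFICATE-----\r?\n?',
    re.DOTALL)


def split_and_normalize(fullchain_pem):
    """Return each certificate of a fullchain re-encoded as canonical PEM bytes."""
    certs = []
    for match in CERT_RE.finditer(fullchain_pem):
        # Pass 2: parse and re-encode, which normalizes line endings and
        # anything non-canonical inside the boundaries.
        cert = x509.load_pem_x509_certificate(match.group(0))
        certs.append(cert.public_bytes(Encoding.PEM))
    return certs
```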
* Fix fullchain parsing for CRLF chains.
fullchain parsing now works in two passes:
1. A first pass which is generous with what it accepts - basically
preeb(CERTIFICATE)+anything+posteb(CERTIFICATE). This determines
the boundaries for each certificate.
2. A second pass which normalizes (by parsing and re-encoding) each
certificate found in the first pass.
* typo in docstring
* remove redundant group in regex
* can't use assertRaisesRegex until py27 is gone
* acme: socket timeout for HTTP standalone servers
Adds a default 30 second timeout to the StreamRequestHandler for clients
connecting to standalone HTTP-01 servers. This should prevent most cases
of an idle client connection from preventing the standalone server from
shutting down.
Fixes#7386
* use idiomatic kwargs default value
* move HTTP01Server lower to fix mypy forward ref.
* fix test crash on macOS due to socket double-close
* maybe its not an OSError?
* disable coverage check on useless branch
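A minimal sketch of the handler-level timeout described in the first commit above (illustrative classes, not the actual acme standalone server):
```
import http.server
import socketserver


class ChallengeHandler(http.server.BaseHTTPRequestHandler):
    # socketserver applies this to the client socket in setup(); a stalled
    # client now raises socket.timeout instead of blocking shutdown forever.
    timeout = 30

    def do_GET(self):
        self.send_response(200)
        self.send_header('Content-Type', 'application/octet-stream')
        self.end_headers()
        self.wfile.write(b'example key authorization')


if __name__ == '__main__':
    # Serve a single request on port 8080 for demonstration purposes.
    with socketserver.TCPServer(('', 8080), ChallengeHandler) as server:
        server.handle_request()
```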
Fixes#7594.
Removes the code asking interactively if the user would like to add a redirect.
* Remove interactive redirect ask
* display.enhancements is no longer used, so remove it.
* update changelog
* remove references to removed display.enhancements
* add redirect_default flag to enhance_config to conditionally set default for redirect value
* Update default in help text.
* Load apacheconfig dependency, gate behind flag
* Bump apacheconfig dependency to latest version and install dev version of apache for coverage tests
* Move augeasnode_test tests to more generic parsernode_test
* Revert "Move augeasnode_test tests to more generic parsernode_test"
This reverts commit 6bb986ef786b9d68bb72776bde66e6572cf505a9.
* Mock AugeasNode into DualNode's place, and run augeasnode tests exclusively on AugeasNode
* Don't calculate coverage for skeleton functions
* clean up helper function in augeasnode_test
Fixes#7350.
This PR changes the parsed modules from a `set` to a `dict`, with the filepath argument as the value. Accordingly, after calling `enable_mod` to enable `ssl_module`, modules now need to be re-parsed, so call `reset_modules`.
* Add mechanism for selecting apache config file, based on work done in #7191.
* Check OpenSSL version
* Remove os imports
* debian override still needs os
* Reformat remaining apache tests with modules dict syntax
* Clean up more apache tests
* Switch from property to method for openssl and add tests for coverage.
* Sometimes the dict location will be None in which case we should in fact return None
* warn thoroughly and consistently in openssl_version function
* update tests for new warnings
* read file as bytes, and factor out the open for testing
* normalize ssl_module_location path to account for being relative to server root
* Use byte literals in a python 2 and 3 compatible way
* string does need to be a literal
* patch builtins open
* add debug, remove space
* Add test to check if OpenSSL detection is working on different systems
* fix relative test location for cwd
* put </IfModule> on its own line in test case
* Revert test file to status in master.
* Call augeas load before reparsing modules to pick up the changes
* fix grep, tail, and mod_ssl location on centos
* strip the trailing whitespace from fedora
* just use LooseVersion in test
* call apache2ctl on debian systems
* Use sudo for apache2ctl command
* add check to make sure we're getting a version
* Add boolean so we don't warn on debian/ubuntu before trying to enable mod_ssl
* Reduce warnings while testing by setting mock _openssl_version.
* Make sure we're not throwing away any unwritten changes to the config
* test last warning case for coverage
* text changes for clarity
This PR builds on #7657 and cleans up additional unnecessary pylint comments and some stray comments referring to pylint: disable comments that have been deleted that I didn't notice in my review of that PR.
* Remove stray pylint link.
* Cleanup more pylint comments
* Cleanup magic_typing imports
* Remove unneeded pylint: enable comments
This PR is an alternative to #7125.
Instead of disabling the strict mode on Pebble, this PR fixes the JWS payloads to be compliant with RFC 8555, allowing certbot to work with Pebble v2.1.0+.
* Fix acme compliance to RFC 8555.
* Working mixin
* Activate back pebble strict mode
* Use mixin for type
* Update dependencies
* Fix also in fields_to_partial_json
* Update pebble
* Add changelog
This PR is the first part of work described in #6724.
It reintroduces the tls-alpn-01 challenge in the `acme` module, which was originally introduced by #5894 and reverted by #6100. The reason it was removed in the past is that some tests showed that with the `1.0.2` branch of OpenSSL, the self-signed certificate containing the authorization key is sent to the requester even if the ALPN protocol `acme-tls/1` was not declared as supported by the requester during the TLS handshake.
However, recent discussions led to the conclusion that this behavior was not a security issue, because first it is consistent with the behavior of servers that do not support ALPN at all, and second it cannot make a tls-alpn-01 challenge be validated in this kind of corner case.
On top of the original modifications from #5894, I merged the code to be up to date with our `master`, and fixed tests to match the recent change of not displaying the `keyAuthorization` in the deserialized JSON form of an ACME challenge.
I also moved the logic that verifies whether ALPN is available on the current system (and so whether the tls-alpn-01 challenge can be used) to a dedicated static function `is_available` in `acme.challenge.TLSALPN01`. This function is used to skip the related tests when needed, and will be used in the future by Certbot plugins to enable or disable the tls-alpn-01 logic, depending on the OpenSSL version available to Python.
* Reimplement TLS-ALPN-01 challenge and standalone TLS-ALPN server from #5894.
* Setup a class method to check if tls-alpn-01 is supported.
* Add potential missing parameter in validation for tls-alpn
* Improve comments
* Make a class private
* Handle old versions of openssl that do not terminate the handshake when they should do.
* Add changelog
* Explicitly close the TLS connection by the book.
* Remove unused exception
* Fix lint
Fixes#7835
I had to mock out `get_serial_from_cert` to keep a test from failing, because `cert_path` was mocked itself in `test_report_human_readable`.
Also, I kept the same style for the serial number as the recent Let's Encrypt e-mail: lowercase hexadecimal without a `0x` prefix and without colons every 2 chars. Shouldn't be a problem to change the format if required.
Fixes#5484
This PR makes Certbot expose two new environment variables in the auth and cleanup hooks of the `manual` plugin:
* `CERTBOT_REMAINING_CHALLENGES` contains the number of challenges that remain after the current one (so it equals 0 when the script is called for the last challenge)
* `CERTBOT_ALL_DOMAINS` contains a comma-separated list of all domains concerned by a challenge for the current certificate
With these variables, a hook script can know when it is being run for the last time, and then trigger appropriate finalizers for all challenges that have been executed. This will be particularly useful for certificates with many domains validated with DNS-01 challenges: instead of waiting on each hook execution to check that the relevant DNS TXT record has been inserted, these waits can be avoided by having the last hook verify all domains in one run.
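For example, an auth hook could defer its DNS propagation wait to the very last invocation. This sketch uses the two new variables plus the pre-existing `CERTBOT_DOMAIN`/`CERTBOT_VALIDATION`; `push_txt_record` is a placeholder for whatever DNS API call you use:
```
#!/usr/bin/env python3
"""Illustrative manual auth hook."""
import os
import time


def push_txt_record(name, value):
    # Placeholder: call your DNS provider's API here.
    print('would create TXT record %s = %s' % (name, value))


domain = os.environ['CERTBOT_DOMAIN']
validation = os.environ['CERTBOT_VALIDATION']
remaining = int(os.environ['CERTBOT_REMAINING_CHALLENGES'])
all_domains = os.environ['CERTBOT_ALL_DOMAINS'].split(',')

push_txt_record('_acme-challenge.' + domain, validation)

if remaining == 0:
    # Last invocation for this certificate: wait once for every record to
    # propagate instead of sleeping after each individual hook run.
    print('waiting for TXT records of %d domain(s) to propagate' % len(all_domains))
    time.sleep(60)
```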
* Inject environment variables in manual scripts about remaining challenges
* Adapt tests
* Less variables and less lines
* Update manual.py
* Update manual_test.py
* Add documentation
* Add changelog
Fixes#1028.
Doing this now because of https://community.letsencrypt.org/t/revoking-certain-certificates-on-march-4/.
The new `ocsp_revoked_by_paths` function is taken from https://github.com/certbot/certbot/pull/7649 with the optional argument removed for now because it is unused.
This function was added in this PR because `storage.py` uses `self.latest_common_version()` to determine which certificate should be looked at for determining renewal status at 9f8e4507ad/certbot/certbot/_internal/storage.py (L939-L947)
I think this is unnecessary and you can just look at the currently linked certificate, but I don't think we should be changing the logic that code has always had now.
* Check OCSP status as part of determining to renew
* add integration tests
* add ocsp_revoked_by_paths
Certificates are public information by design: they are provided by
web servers without any prior authentication required. In a public
key cryptographic system, only the private key is secret information.
The private key file is already created as accessible only to the root
user with mode 0600, and these file permissions are set before any key
content is written to the file. There is no window within which an
attacker with access to the containing directory would be able to read
the private key content.
Older versions of Certbot (prior to 0.29.0) would create private key
files with mode 0644 and rely solely on the containing directory
permissions to restrict access. We therefore cannot (yet) set the
relevant default directory permissions to 0755, since it is possible
that a user could install Certbot, obtain a certificate, then
downgrade to a pre-0.29.0 version of Certbot, then obtain another
certificate. This chain of events would leave the second
certificate's private key file exposed.
As a compromise solution, document the fact that it is safe for the
common case of non-downgrading users to change the permissions of
/etc/letsencrypt/{live,archive} to 0755, and explain how to use chgrp
and chmod to make the private key file readable by a non-root service
user.
This provides guidance on the simplest way to solve the common problem
of making keys and certificates usable by services that run without
root privileges, with no requirement to create a custom (and hence
error-prone) executable hook.
Remove the existing custom executable hook example, so that the
documentation contains only the simplest and safest way to solve this
very common problem.
Signed-off-by: Michael Brown <mbrown@fensystems.co.uk>
After getting a +1 from everyone on the team, this PR removes the use of `codecov` from the Certbot repo because we keep having problems with it.
Two noteworthy things about this PR are:
1. I left the text at 4ea98d830b/.azure-pipelines/INSTALL.md (add-a-secret-variable-to-a-pipeline-like-codecov_token) because I think it's useful to document how to set up a secret variable in general.
2. I'm not sure what the text "Option -e makes sure we fail fast and don't submit to codecov." in `tox.cover.py` refers to but it seems incorrect since `-e` isn't accepted or used by the script so I just deleted the line.
As part of this, I said I'd open an issue to track setting up coveralls (which seems to be the only real alternative to codecov) which is at https://github.com/certbot/certbot/issues/7810.
With my change, failure output looks something like:
```
$ tox -e py27-cover
...
Name Stmts Miss Cover Missing
------------------------------------------------------------------------------------------
certbot/certbot/__init__.py 1 0 100%
certbot/certbot/_internal/__init__.py 0 0 100%
certbot/certbot/_internal/account.py 191 4 98% 62-63, 206, 337
...
certbot/tests/storage_test.py 530 0 100%
certbot/tests/util_test.py 374 29 92% 211-213, 480-484, 489-499, 504-511, 545-547, 552-554
------------------------------------------------------------------------------------------
TOTAL 14451 647 96%
Command '['/path/to/certbot/dir/.tox/py27-cover/bin/python', '-m', 'coverage', 'report', '--fail-under', '100', '--include', 'certbot/*', '--show-missing']' returned non-zero exit status 2
Test coverage on certbot did not meet threshold of 100%.
ERROR: InvocationError for command /Users/bmw/Development/certbot/certbot/.tox/py27-cover/bin/python tox.cover.py (exited with code 1)
_________________________________________________________________________________________________________________________________________________________ summary _________________________________________________________________________________________________________________________________________________________
ERROR: py27-cover: commands failed
```
I printed the exception just so we're not throwing away information.
I think it's also possible we fail for a reason other than the coverage not meeting the threshold, but I've personally never seen this; `coverage report` output is not being captured, so hopefully that would inform devs if something else is going on, and saying something like "Test coverage probably did not..." seems like overkill to me personally.
* remove codecov
* remove unused variable group
* remove codecov.yml
* Improve tox.cover.py failure output.
Related to https://github.com/certbot/certbot/pull/7482, this removes some references to deprecated options in Certbot.
The only references I didn't remove were:
* In `certbot/tests/testdata/sample-renewal*` which contains a lot of old values and I think there's even some value in keeping them so we know if we make a change that suddenly causes old renewal configuration files to error.
* In the Apache and Nginx plugins and I created https://github.com/certbot/certbot/issues/7508 to resolve that issue.
If you run `mypy --platform darwin certbot/certbot/util.py` you'll get:
```
certbot/certbot/util.py:303: error: Name 'distro' is not defined
certbot/certbot/util.py:319: error: Name 'distro' is not defined
certbot/certbot/util.py:369: error: Name 'distro' is not defined
```
This is because mypy's logic for handling platform specific code is pretty simple and can't figure out what we're doing with `_USE_DISTRO` here. See https://mypy.readthedocs.io/en/stable/common_issues.html#python-version-and-system-platform-checks for more info.
Setting `_USE_DISTRO` to the result of `sys.platform.startswith('linux')` solves the problem without changing the overall behavior of our code here though.
This fixes part of https://github.com/certbot/certbot/issues/7803, but there's more work to be done on Windows.
I don't fully understand why, but since I updated my macbook to macOS Catalina, the test script currently fails to run for me with the versions of our dependencies we have pinned. Updating the dependencies solves the problem though and you can see Travis also successfully running tests with these new dependencies at https://travis-ci.com/certbot/certbot/builds/150573696.
I want to do what I did in https://github.com/certbot/certbot/pull/7733 to our Azure Pipelines setup, but unfortunately this isn't currently possible. The only filters available for service hooks for the "build completed" trigger are the pipeline and build status.
To accomplish this, I propose splitting the "advanced" pipeline into two cases. One is for builds on protected branches where we want to be notified if they fail while the other is just used to manually run tests on certain branches.
* Refactor cli.py into a package with submodules
* Added unit tests for helpful module in cli.
* Fixed linter errors
* Fixed pylint issues
* Updated changelog.md
* Fixed failing test and mypy error. A new pylint error appeared (seems to conflict with mypy)
mypy requires zope.interface to be imported, but once imported it is not used and pylint throws an error.
* Fixed pylint errors
* Apply changes to cli since last merge from master (efc8d49806)
* Fix lint
* Remaining lint errors
Co-authored-by: Adrien Ferrand <adferrand@users.noreply.github.com>
This fixes (part of) the problem identified in https://github.com/certbot/certbot/pull/7657#issuecomment-586506340.
When I tested our pylint setup on Python 3.5.9, 3.6.9, or 3.6.10, tests failed with:
```
************* Module acme.challenges
acme/acme/challenges.py:57:15: E1101: Instance of 'UnrecognizedChallenge' has no 'jobj' member (no-member)
************* Module acme.jws
acme/acme/jws.py:28:16: E1101: Class 'Signature' has no '_orig_slots' member (no-member)
```
These errors did not occur for me on Python 3.6.7 or Python 3.7+.
You also cannot run our lint setup on Python 2.7 because our pinned version of pylint's dependency `astroid` does not support Python 2. Because of this, `pylint` is not installed in the virtual environment created by `tools/venv.py` and our [`lint` environment in tox specifies that Python 3 should be used](fd64c8c33b/tox.ini (L132)).
I tried updating pylint and its dependencies to fix the problem, but they still occur so I think adding back these disable checks on these lines again is the best fix for now.
As pylint evolves, it improves its accuracy, and several pylint error suppressions (`# pylint: disable=ERROR`) added to the certbot codebase months or years ago are no longer needed to make it happy.
There is a (disabled by default) pylint check to detect these useless suppressions (pylint-ception: `useless-suppression`). It does not work perfectly (it also has false positives...) but it is a good start for cleaning the codebase.
This PR removes several of these useless suppressions as detected by the current pylint version we use.
* Remove useless suppress
* Remove useless lines
These tests failed at https://travis-ci.com/certbot/certbot/jobs/285202481 but do not include any output from the script about what went wrong, because the string created from `subprocess.CalledProcessError` does not include the value of its `output` attribute.
This PR fixes that by printing these values which `pytest` will include in the output if the test fails.
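A sketch of the idea (the command shown is illustrative):
```
import subprocess

try:
    subprocess.check_output(['./run_letstest.sh'], stderr=subprocess.STDOUT)
except subprocess.CalledProcessError as error:
    # str(error) omits the captured output, so print it explicitly;
    # pytest will then show it whenever the test fails.
    print('exit status: %d' % error.returncode)
    print(error.output)
    raise
```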
We should move ocsp.py to the public API, as upcoming OCSP prefetching functionality in the Apache plugin relies on it, and as the plugins are not released in lockstep with the Certbot core, we need to be careful when changing those APIs.
* Move ocsp.py to public api
* Fix type annotations, move to pointing to an interface and fix linting
* Add certbot.ocsp to documentation table of contents
* Modify tests to reflect the changes in ocsp.py
* Add changelog entry
* Fix notAfter mock for tests
After a brief discussion in Mattermost, I shut down letsencrypt.readthedocs.io. Turns out we were linking to it in our README here so let's remove the broken link.
I didn't update the link to point to one of the readthedocs projects we still have because our main Certbot docs are self-hosted rather than being on readthedocs.
Currently if you go to https://certbot.eff.org/docs/api/certbot.crypto_util.html, there is a todo comment displayed at the top of the page. These todos were written for developers, not users, so I do not think they should be shown from our documentation.
This PR makes the quick and easy fix of configuring Sphinx not to show these todo items. I created #7752 to track removing all of these todos from our docstrings and disabling the Sphinx todo extension.
* Set todo_include_todos=False in sphinx-quickstart
* Remove todos from existing docs.
As discussed in #7539, we need proper tests of the Windows installer itself in order to verify that all the logic contained in a production-grade runtime of Certbot on Windows is correctly set up by each version of the installer, and this for a variety of Windows OSes.
This PR handles this requirement. The new `windows_installer_integration_tests` module in `certbot-ci` will:
* run the given Windows installer
* check that Certbot is properly installed and working
* check that the scheduled renew task is set up
* check that the scheduled task actually launches the Certbot renew logic
The Windows nightly tests are updated accordingly, in order to have the tests run on Windows Server 2012R2, 2016 and 2019.
These tests will evolve as we add more logic on the installer.
* Configure an integration test testing the windows installer
* Write the test module
* Configurable installer path, prepare azure pipelines
* Fix option
* Update test_main.py
* Add confirmation for this destructive test
* Use regex to validate certbot --version output
* Explicit dependency on a log output
* Use an exception to ask confirmation
* Use --allow-persistent-changes
When I want to manually run the full test suite to test something, I've been manually deleting our notification setup from `.travis.yml` to avoid spamming IRC with my personal test failures.
This PR sets this behavior up to happen automatically by turning off IRC notifications in test branches. You can see this working by noticing the IRC notification section in the bottom of the config for this PR at https://travis-ci.com/certbot/certbot/builds/146827907/config and the fact that it is absent from a `test-` branch based on this one at https://travis-ci.com/certbot/certbot/jobs/282059094/config.
A while ago Cloudflare added support for limited-scope API Tokens in place of using a global API key, but support for them in cloudflare/python-cloudflare took a while to get through.
In summary, this PR:
- Implements token functionality through the INI file parameter `dns_cloudflare_api_token` (in addition to the traditional `dns_cloudflare_email` and `dns_cloudflare_api_key`). This needed a more advanced parameter validator than the built in `required_variables` mechanism.
- Updates the docs to reflect the new option, needed token permissions, and version details of the `cloudflare` module
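A rough sketch of the kind of validation the first item describes (hypothetical function; the real plugin wires this into its credentials handling, and `credentials` is assumed to expose a `conf(name)` accessor for the `dns_cloudflare_<name>` INI options):
```
def validate_cloudflare_credentials(credentials):
    """Accept either an API token or the legacy email + global API key pair."""
    token = credentials.conf('api-token')
    email = credentials.conf('email')
    api_key = credentials.conf('api-key')
    if token:
        return  # a scoped token alone is enough
    if not (email and api_key):
        raise ValueError(
            'Either dns_cloudflare_api_token, or both dns_cloudflare_email '
            'and dns_cloudflare_api_key, must be provided.')
```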
* Update python-cloudflare version
* Add Cloudflare API Token support to certbot-dns-cloudflare
* Add token-specific errors to certbot-dns-cloudflare
* Tidy up certbot-dns-cloudflare
* Implement Cloudflare API Tokens in testing for certbot-dns-cloudflare(needs work)
* Further tidying of certbot-dns-cloudflare
* Update CHANGELOG with Cloudflare API Tokens implementation
* Improve testing of certbot-dns-cloudflare
* Improve certbot-dns-cloudflare test formatting
* Further improve testing for certbot-dns-cloudflare
* Change needed permissions for token
* Add documentation regarding python-cloudflare version
* Fix changelog, references to python-cloudflare and docs
* Fix behaviour when domain does not match cloudflare root domain. Improve error handling.
* Improve testing
* Improve hints and error handling
Part of #7204.
Makes the smaller changes described at https://github.com/certbot/certbot/issues/7204#issuecomment-571838185 to disable many old ciphersuites and TLS versions < 1.2. Does not add checks for OpenSSL version or modify session tickets.
Since Apache uses TLS protocol blacklisting instead of whitelisting (as in NGINX), we additionally may not need to determine if the server supports TLS1.3 and turn it on or off based on Apache version.
* Update SSL versions and ciphersuites based on Mozilla intermediate recommendations for apache
* Update constants with hashes of new config files
* Update changelog
As mentioned in https://github.com/certbot/certbot/pull/7712#discussion_r370419867, it's time to remove this ciphersuite now that Windows 2008 R2 and Windows 7 are EOLed.
* Remove ECDHE-RSA-AES128-SHA from NGINX ciphers list to celebrate Windows 2008 R2 deprecation
* Update changelog
It looks like we're currently documenting functions that are marked private (prefixed with an underscore) such as https://certbot.eff.org/docs/api/certbot.crypto_util.html#certbot.crypto_util._load_cert_or_req. I do not think we should do this because the functionality is private, should not be used, and including it in our docs just adds visual noise.
This PR stops us from documenting private code and fixes up `tools/sphinx-quickstart.sh` so we don't document it in future modules.
* Do not document private code.
* Don't document private members in the future.
Fixes#7110
This PR declares docker-compose as a requirement for certbot-ci. This way, a recent version of docker-compose is installed in the standard virtual environment set up by `tools/venv.py` and `tools/venv3.py`, and so is available to pytest integration tests run from `tox` or with the virtual environment activated.
* Add docker-compose as a dev dependency and declares it in certbot-ci requirements
* Update docker-compose 1.25.0
This PR extends the installer tests on Azure Pipelines, in order to run the integration tests against a certbot instance installed with the Windows installer on several Windows versions, corresponding to the scope of versions supported by Certbot:
* Windows Server 2012 R2
* Windows Server 2016
* Windows Server 2019
One can see the result on: https://dev.azure.com/adferrand/certbot/_build/results?buildId=311
* Try specific installer-build step
* Install Python manually
* Add tests on windows 2019
Part of #7550
This PR makes appropriate corrections to run pylint on Python 3.
Why not keep the dependencies unchanged and just run pylint on Python 3?
Because the old version of pylint breaks horribly on Python 3 due to an unsupported version of astroid.
Why update pylint + astroid to the latest version?
Because only this version fixes some internal errors occurring during the linting of the Certbot code, and it is also ready to run gracefully on Python 3.8.
Why upgrade mypy?
Because the old version does not support the new version of astroid required to run pylint correctly.
Why not upgrade mypy to its latest version?
Because the latest version includes a new typeshed version that adds a lot of new type definitions and brings dozens of new errors to the Certbot codebase. I would like to fix that in a future PR.
That said, the work consisted of finding the correct set of new dependency versions, configuring pylint sanely for our situation, disabling irrelevant linting errors, and then fixing (or ignoring for good reason) the remaining mypy errors.
I also made PyLint and MyPy checks run correctly on Windows.
* Start configuration
* Reconfigure travis
* Suspend a check specific to python 3. Start fixing code.
* Repair call_args
* Fix return + elif lints
* Reconfigure development to run mainly on python3
* Remove incompatible Python 3.4 jobs
* Suspend pylint in some assertions
* Remove pylint in dev
* Take first mypy that supports typed-ast>=1.4.0 to limit the migration path
* Various return + else lint errors
* Find a set of deps that is working with current mypy version
* Update local oldest requirements
* Remove all current pylint errors
* Rebuild letsencrypt-auto
* Update mypy to fix pylint with new astroid version, and fix mypy issues
* Explain type: ignore
* Reconfigure tox, fix none path
* Simplify pinning
* Remove useless directive
* Remove debugging code
* Remove continue
* Update requirements
* Disable unsubscriptable-object check
* Disable one check, enabling two more
* Plug certbot dev version for oldest requirements
* Remove useless disable directives
* Remove useless no-member disable
* Remove no-else-* checks. Use elif in symmetric branches.
* Add back assertion
* Add new line
* Remove unused pylint disable
* Remove other pylint disable
A lot of Certbot's files don't have API documentation which is fixed by this PR. To do this, from the top level certbot directory I ran:
```
sphinx-apidoc -Me -o docs/api certbot
```
I then merged the resulting `modules.rst` file with `docs/api.rst`.
The old plugin at https://github.com/marcan/certbot-external says it's obsolete and points people to https://github.com/EnigmaBridge/certbot-external-auth. The new plugin is also an installer.
I also removed the reference to #2782 about us adding similar functionality since that's been done for a long time. We could reference our manual plugin instead, but I think that devalues their plugin a bit which I don't think is necessary or correct as it has different features.
The version of pywin32 currently used in Certbot (225) does not have wheels available for Python 3.8. Installing Certbot for development in this case requires building pywin32 from source. On Windows, this implies having a Visual Studio C++ environment up and ready, which is absolutely not fun.
Let's upgrade to pywin32 227, which provides wheels for all Python versions from 3.5 up to the current development state of 3.9.
This PR adds appropriate corrections to the Certbot Dockerfile to work with the new layout (the certbot Python project moved into its own subdirectory).
This PR has been tested successfully using the `build.sh` script against a fake v1.0 version of Certbot published on my fork (https://github.com/adferrand/certbot/releases/tag/v1.0) instead of archives from `certbot/certbot`.
This is my proposed fix for #7540. I would ideally like this to be included in our 1.0 release.
I came up with this design by adding all attributes used either in our own plugins, 3rd party plugins listed at https://certbot.eff.org/docs/using.html#third-party-plugins, or our public API code.
Despite my thinking that zope is unneeded nowadays, I initially tried to use it to define this interface since we already have it and it gives us a way to define expected attributes, but it doesn't work because zope interface objects also have a method called `names` which conflicts with the API.
I talked about this with Adrien out of band and did some of my own research and there are some minor benefits with this new approach of using properties:
1. It's more conventional.
2. If you also change the implementation to inherit from the class, Python will error if all properties aren't defined.
3. The PEP 526 style type annotations with mypy seem to (currently) only be used to validate code using the class, not the class implementation itself. You can add a type annotation saying the class needs to have this attribute, never define it, and mypy won't complain.
With this new approach, I had to fix `names` because pylint was complaining that the arguments differed; however, we never used the optional parameter to `names` outside of tests, so I just deleted the code altogether.
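To make the second point above concrete, here is a minimal sketch of the property-based style; the class and member names are assumptions, not the exact Certbot interface:
```python
from abc import ABCMeta, abstractmethod


class Authenticator(metaclass=ABCMeta):
    """Illustrative sketch of the property-based interface style."""

    @property
    @abstractmethod
    def name(self) -> str:
        """Short name of the plugin."""


class MyAuthenticator(Authenticator):
    # Forgetting to define `name` here would make instantiation fail with a
    # TypeError, unlike a bare PEP 526 annotation (`name: str`), which mypy
    # only uses to check callers of the class.
    @property
    def name(self) -> str:
        return "my-authenticator"


print(MyAuthenticator().name)
```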
* fixes#7540
* move to properties
* Move acme tests to tests/ directory outside of acme module
* Fix call to messages_test in client_test
* Move test_util.py and testdata/ into tests/
* Update manifest to package tests
* Exclude pycache and .py[cod]
* Refactor tests out of module for certbot-dns-cloudflare
* Refactor tests out of module for certbot-dns-cloudxns
* Refactor tests out of module for certbot-dns-digitalocean
* Refactor tests out of module for certbot-dns-dnsimple
* Refactor tests out of module for certbot-dns-dnsmadeeasy
* Refactor tests out of module for certbot-dns-gehirn
* Refactor tests out of module for certbot-dns-google
* Refactor tests out of module for certbot-dns-linode
* Refactor tests out of module for certbot-dns-luadns
* Refactor tests out of module for certbot-dns-nsone
* Refactor tests out of module for certbot-dns-ovh
* Refactor tests out of module for certbot-dns-rfc2136
* Refactor tests out of module for certbot-dns-sakuracloud
* Refactor tests out of module for certbot-dns-route53
* Move certbot-dns-google testdata/ under tests/
* Use pytest for dns plugins
* Exclude pycache and .py[cod]
Clean up some places missed by #7544.
Found this when running test farm tests. They were working as of 5d90544, and I will be truly shocked if subsequent changes (all to the Windows installer) made them stop working.
* Release script needs to target new CHANGELOG location
* Clean up various other CHANGELOG path references
* Update windows paths for new certbot location
* Add certbot to packages list for windows installer
* Update pull_request_template.md
* Remove line breaks
GitHub seems to be keeping the line breaks rather than ignoring them, making the template render weirdly, so remove them.
Part of #5775.
* Create _internal folder certbot-nginx
* Move configurator.py to _internal
* Move constants.py to _internal
* Move display_ops.py to _internal
* Move http_01.py to _internal
* Move nginxparser.py to _internal
* Move obj.py to _internal
* Move parser_obj.py to _internal
* Move parser.py to _internal
* Update location and references for tls_configs
* exclude parser_obj from coverage
Summary of changes in this PR:
- Refactor files involved in the `certbot` module to be of a similar structure to every other package; that is, inside a directory inside the main repo root (see below).
- Make repo root README symlink to `certbot` README.
- Pull tests outside of the distributed module.
- Make `certbot/tests` not be a module so that `certbot` isn't added to Python's path for module discovery.
- Remove `--pyargs` from test calls, and make sure to call tests from repo root since without `--pyargs`, `pytest` takes directory names rather than package names as arguments.
- Replace mentions of `.` with `certbot` when referring to packages to install, usually editably.
- Clean up some unused code around executing tests in a different directory.
- Create public shim around main and make that the entry point.
New directory structure summary:
```
repo root ("certbot", probably, but for clarity all files I mention are relative to here)
├── certbot
│ ├── setup.py
│ ├── certbot
│ │ ├── __init__.py
│ │ ├── achallenges.py
│ │ ├── _internal
│ │ │ ├── __init__.py
│ │ │ ├── account.py
│ │ │ ├── ...
│ │ ├── ...
│ ├── tests
│ │ ├── account_test.py
│ │ ├── display
│ │ │ ├── __init__.py
│ │ │ ├── ...
│ │ ├── ... # note no __init__.py at this level
│ ├── ...
├── acme
│ ├── ...
├── certbot-apache
│ ├── ...
├── ...
```
* refactor certbot/ and certbot/tests/ to use the same structure as the other packages
* git grep -lE "\-e(\s+)\." | xargs sed -i -E "s/\-e(\s+)\./-e certbot/g"
* git grep -lE "\.\[dev\]" | xargs sed -i -E "s/\.\[dev\]/certbot[dev]/g"
* git grep -lE "\.\[dev3\]" | xargs sed -i -E "s/\.\[dev3\]/certbot[dev3]/g"
* Remove replacement of certbot into . in install_and_test.py
* copy license back out to main folder
* remove linter_plugin.py and CONTRIBUTING.md from certbot/MANIFEST.in because these files are not under certbot/
* Move README back into main folder, and make the version inside certbot/ a symlink
* symlink certbot READMEs the other way around
* move testdata into the public api certbot zone
* update source_paths in tox.ini to certbot/certbot to find the right subfolder for tests
* certbot version has been bumped down a directory level
* make certbot tests directory not a package and import sibling as module
* Remove unused script cruft
* change . to certbot in test_sdists
* remove outdated comment referencing a command that doesn't work
* Install instructions should reference an existing file
* update file paths in Dockerfile
* some package named in tox.ini were manually specified, change those to certbot
* new directory format doesn't work easily with pyargs according to http://doc.pytest.org/en/latest/goodpractices.html#tests-as-part-of-application-code
* remove other instance of pyargs
* fix up some references in _release.sh by searching for ' . ' and manual check
* another stray . in tox.ini
* fix paths in tools/_release.sh
* Remove final --pyargs call, and now-unnecessary call to modules instead of local files, since that's fixed by certbot's code being one layer deeper
* Create public shim around main and make that the entry point
* without pyargs, tests cannot be run from an empty directory
* Remove cruft for running certbot directly from main
* Have main shim take real arg
* add docs/api file for main, and fix up main comment
* Update certbot/docs/install.rst
Co-Authored-By: Brad Warren <bmw@users.noreply.github.com>
* Fix comments in readthedocs requirements files to refer to current package
* Update .[docs] reference in contributing.rst
* Move plugins tests to certbot tests directory
* add certbot tests to MANIFEST.in so packagers can run python setup.py test
* move examples directory inside certbot/
* Move CHANGELOG into certbot, and create a top-level symlink
* Remove unused sys and logging from main shim
* nginx http01 test no longer relies on certbot plugins common test
Part of #5775. We don't use these docs anywhere, so delete them.
Removes:
- `certbot-nginx/readthedocs.org.requirements.txt`
- `certbot-nginx/docs/` folder
- docs include in `MANIFEST.in`
- docs dependencies in `setup.py`
* Remove unused nginx docs
* Add changelog entry about the removal
Part of #5775. We don't use these docs anywhere, so delete them.
Removes:
- `certbot-compatibility-test/readthedocs.org.requirements.txt`
- `certbot-compatibility-test/docs/` folder
- docs include in `MANIFEST.in`
- docs dependencies in `setup.py`
Part of #5775. We don't use these docs anywhere, so delete them.
Removes:
- `certbot-apache/readthedocs.org.requirements.txt`
- `certbot-apache/docs/` folder
- docs include in `MANIFEST.in`
- docs dependencies in `setup.py`
* Fix metadata & primary references in Augeas tests.
When performing actions only on one of the trees in DualNodeParser, the two
trees get out-of-sync. Similarly, we can't expect that the metadata between
the two trees will remain the same.
Did a pass over the tests to re-wire metadata and primary usage.
* Add ApacheParser skeleton.
Fix plumbing in configurator & dualparser to initialize ApacheParser
alongside AugeasParser.
* Silence coverage reports for now
Fixes#7184.
I updated #7358 to track the issue of unpinning all of these dependencies.
* pin back configargparse
* Pin back zope packages.
* update deps
* Add changelog entry.
* run build.py
When you try to run this script, it crashes with:
```
standard_init_linux.go:211: exec user process caused "exec format error"
```
This is caused by the script being written to have the contents:
```
\
#!/bin/sh
set -e
...
```
This fixes the problem by removing the slash and moving the shebang to the first line of the string.
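As a rough illustration, assuming the script body is generated from a Python string (an assumption about the surrounding code, not the actual source), the fixed form looks like this:
```python
# Before the fix, the string began with a stray "\" line, so the written file
# did not start with "#!" and the kernel refused to exec it.
SCRIPT = """#!/bin/sh
set -e
echo "hello from the generated script"
"""

with open("generated.sh", "w") as output:  # hypothetical output path
    output.write(SCRIPT)
```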
Part of #5775. Methodology similar to #7528. Also refactors NGINX test util to use certbot.tests.util.ConfigTestCase.
* refactor nginx tests to no longer rely on certbot.configuration internals
* Move configuration.py to _internal
While coding for #7536, I ran into another issue. It appears that Certbot logs generated during the scheduled task execution have wrong permissions that make them almost unusable: they do not have an owner, and their ACL contains nonsense values (nonexistent account names).
The class `logging.handlers.RotatingFileHandler` is responsible for these logs, and it goes mad when it runs in a Python process launched by a scheduled task owned by `SYSTEM`. This is precisely our case here.
This PR avoids (but does not fix) the issue by changing the owner of the scheduled task from `SYSTEM` to the `Administrators` group, which appears to work fine.
* Use Administrators group instead of SYSTEM to run the certbot renew task
It turned out that the scheduled task that runs `certbot renew` twice a day is failing. Without any kind of log of course, otherwise it would not be fun.
The bug can be revealed by opening a PowerShell under the `NT AUTHORITY\SYSTEM` account, under which the scheduled task is run. Under these circumstances, Certbot breaks when trying to invoke `certbot.compat.filesystem._get_current_user()`. Indeed, the logic there calls `win32api.GetUserNameEx(win32api.NameSamCompatible)`, and this function does not always return a useful value.
For a normal account, the result is typically `DOMAIN_OR_MACHINE_NAME\YOUR_USER_NAME` (e.g. `My Machine\Adrien Ferrand`). But for the `NT AUTHORITY\SYSTEM` account, it returns `MACHINE_NAME\DOMAIN$`, which is nonsense and makes the resolution of the account's actual SID at the end of `_get_current_user()` fail.
This PR fixes this behavior by using an explicit construction of the account name that works both for normal users and `SYSTEM`.
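For illustration, a minimal sketch of that idea using pywin32 (this is an assumption about the approach, not necessarily the exact code in the PR):
```python
import win32api
import win32security


def _get_current_user():
    """Resolve the SID of the account running the current process."""
    # GetUserName() returns a plain account name ("SYSTEM" for the system
    # account), unlike GetUserNameEx(NameSamCompatible), which returns a bogus
    # MACHINE_NAME\DOMAIN$ value under NT AUTHORITY\SYSTEM.
    account_name = win32api.GetUserName()
    # LookupAccountName searches local, well-known, and domain accounts, so it
    # resolves both normal users and the SYSTEM pseudo-account.
    sid, _domain, _account_type = win32security.LookupAccountName(None, account_name)
    return win32security.ConvertSidToStringSid(sid)
```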
* Use a different way to resolve current user account, that works both for normal users and SYSTEM.
* Add a comment to run Certbot under NT AUTHORITY\SYSTEM
Fixes#7548.
This PR updates the installation instructions to get rid of Python 2 and certbot-auto in the how-to explaining the Certbot development environment setup.
Instead, Python 3 is used, and appropriate instructions for APT and RPM based distributions are provided.
* Don't call core constants from nginx plugin
* Move constants.py to _internal/
* Move ENHANCEMENTS from now-internal constants to public plugins.enhancements
* Update display.enhancements.ask from its 2015 comment
* Move display/completer.py to _internal/
* Move display/dummy_readline.py to _internal/
* Move display/enhancements.py to _internal/
* Create __init__.py in _internal/display
* Create _internal package for Certbot's non-public modules
* Move account.py to _internal
* Move auth_handler.py to _internal
* Move cert_manager.py to _internal
* Move client.py to _internal
* Move error_handler.py to _internal
* Move lock.py to _internal
* Move main.py to _internal
* Move notify.py to _internal
* Move ocsp.py to _internal
* Move renewal.py to _internal
* Move reporter.py to _internal
* Move storage.py to _internal
* Move updater.py to _internal
* update apache and nginx oldest requirements
* Keep the lock file as certbot.lock
* nginx oldest tests still need to rely on newer certbot
* python doesn't have good dependency resolution, so specify the transitive dependency
* update required minimum versions in nginx setup.py
In Travis, the full test suite doesn't run on PRs for point release branches, just on commits for them. I think this behavior makes sense because what we actually want to test before a point release is the exact commit we want to release after any squashing/merging has been done. This PR modifies Azure to match this behavior.
After this PR lands, I need to update the tests required to pass on GitHub.
This PR uses pipstrap to bootstrap the venv used to build Windows installers. This effectively pins all build dependencies, since pynsist is already installed through the pip_install.py script.
* Use pipstrap
* Pin also NSIS version
* acme: re-populate uri in deactivate_authorization
* Use fresh authorizations in dry runs
--dry-run now deactivates 'valid' authorizations if it encounters them
when creating a new order.
Resolves#5116.
* remove unused code
* typo in local-oldest-requirements
* better error handling
* certbot-ci: AUTHREUSE to 100 + unskip dry-run test
* improve test coverage for error cases
* restore newline to local-oldest-requirements.txt
This pull request addresses #7451 by removing the deprecated flags.
* Dropped deprecated flags from commands
* Updated changelog for dropped flags and deleted outdated tests
* removed init-script part of apache test
I tried to finish up #7214 by removing the code in acme but we can't really do that until #7478 is resolved which we cannot do until we release 0.40.0.
Since we have to wait, this PR adds deprecation warnings for code that uses the TLS-SNI-01 code or was only used by the long deprecated TLS-SNI-01 code.
I'd like this PR to land before our next release.
* Deprecate more code related to TLS-SNI-01.
* Assert about warning message.
We don't package rebuild_dependencies.py so I don't think we need to mention changes to it in our changelog which is primarily read by users and packagers.
This pull request ensures that we use the distro package for all distribution version detection. It also replaces the custom systemd /etc/os-release parsing and adds a few version fingerprints to the Apache override selection.
Fixes: #7405
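For reference, a short sketch of the kind of information the distro package exposes (the printed values are illustrative):
```python
import distro

# The distro package replaces both platform.linux_distribution() and the
# hand-rolled /etc/os-release parsing for OS detection.
print(distro.id())        # e.g. 'ubuntu', 'rhel', 'fedora'
print(distro.version())   # e.g. '18.04'
print(distro.like())      # e.g. 'debian' for Ubuntu derivatives
print(distro.linux_distribution(full_distribution_name=False))
```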
* Revert "Try to use platform.linux_distribution() before distro equivalent (#7403)"
This reverts commit ca3077d034.
* Use distro for all os detection code
* Address review comments
* Add changelog entry
* Added tests
* Fix tests to return a consistent os name
* Do not crash on non-linux systems
* Minor fixes to distro compatibility checks
* Make the tests OS independent
* Update certbot/util.py
Co-Authored-By: Brad Warren <bmw@users.noreply.github.com>
* Skip linux specific tests on other platforms
* Test fixes
* Better test state handling
* Lower the coverage target for Windows tests
`certbot-compatibility-test` is using code in `acme` that I proposed making private and not trivially importable in https://github.com/certbot/certbot/issues/5775.
To fix it, I switched to using Certbot's test utilities which I proposed keeping public to help with writing tests for plugins. When doing this I had to change the name of the key because `rsa1024_key.pem` doesn't exist in Certbot.
I also deleted the keys in `certbot-compatibility-test`'s testdata because they are unused.
This is a big part of #7214. It removes all references to TLS-SNI-01 outside of acme (and pytest.ini). Those changes will come in a subsequent PR. I thought this one was getting big enough.
* Remove references to TLS-SNI-01 in Apache plugin
* Remove references to TLS-SNI-01 from certbot-nginx
* Remove references to TLS-SNI from Certbot.
* Remove TLS-SNI reference from docs
* add certbot changelog
* Clarify test behavior
I wanted to polish the changelog a bit. Changes made are:
* We don't ship our test farm tests so including info about them in our changelog seems unnecessary.
* I combined and expanded the info about the deprecation of Python 3.4.
While working on #7214, I noticed that certbot.plugins.common.TLSSNI01 wasn't printing a deprecation warning and it was still being used in our Apache plugin. This PR fixes that.
* find_comments implementation and AugeasCommentNode creation
* Use dummy value for ancestor
* Add NotImplementedError when calling find_comments with exact parameter
* Remove parameter 'exact' from find_comments interface
* Fix comment
Fixes#7007
Python 3.4 is [EOL](https://www.python.org/dev/peps/pep-0429/), and it is the only Python 3.x version available for CentOS 6 through EPEL, so it is the version used by `certbot-auto`, the only official way to install Certbot on this platform.
This unpleasant situation becomes even more uncomfortable considering that the newest `pip` version (19.2) [just dropped Python 3.4 support](https://github.com/pypa/pip/issues/6685) and will refuse to start on this Python version. We can expect a lot of dependencies to follow this path now.
One direct result of this situation is that a fix to correctly support ARM platforms requires upgrading `pip` to 19.2 for `certbot-auto`, so this is not possible right now.
So let's upgrade Certbot instances on CentOS 6 to a supported version of Python 3.
This PR proposes a new bootstrap approach for the CentOS 6 platform, `BootstrapRpmPython3Legacy`, that will install Python 3.6 from [SCL](https://www.softwarecollections.org) (the latest one available for now on CentOS 6). In terms of Python 3 specific bootstrap methods, I take the opportunity here to completely separate the bootstrap of CentOS 6 as a legacy system from the newest RPM-based systems (like Fedora 29+) that are simply dropping support for Python 2.x. This is in preparation for the future migration of all systems to Python 3.x, which is a different problem than supporting old systems.
* Add logic
* Rebuilt letsencrypt-auto
* Fix logic
* Focus on specific packages
* Maintain PATH for further invocations of letsencrypt-auto after bootstrap.
* Various corrections
* Fix farm test for RHEL6
* Working centos6 letsencrypt-auto self tests
* Fix test_sdist for CentOS 6
* Corrections
* Work in progress
* Working configuration
* Fix typo
* Remove EPEL. Add a test.
* Update letsencrypt-auto-source/letsencrypt-auto.template
Co-Authored-By: Brad Warren <bmw@users.noreply.github.com>
* Improvements after review
* Improvements
* Add a comment
* Add a test
* Update a test
* Corrections
* Update function return
* Work in progress
* Correct behavior on oracle linux 6.
* Corrections
* Rebuild script
* Add letsencrypt-auto tests for oraclelinux6
* Update tox.ini
Co-Authored-By: Brad Warren <bmw@users.noreply.github.com>
* Update letsencrypt-auto-source/letsencrypt-auto
Co-Authored-By: Brad Warren <bmw@users.noreply.github.com>
* Update letsencrypt-auto-source/tests/oraclelinux6_tests.sh
Co-Authored-By: Brad Warren <bmw@users.noreply.github.com>
* Update letsencrypt-auto-source/letsencrypt-auto.template
Co-Authored-By: Brad Warren <bmw@users.noreply.github.com>
* Update letsencrypt-auto-source/letsencrypt-auto
Co-Authored-By: Brad Warren <bmw@users.noreply.github.com>
* Update letsencrypt-auto-source/letsencrypt-auto
Co-Authored-By: Brad Warren <bmw@users.noreply.github.com>
* Update letsencrypt-auto-source/letsencrypt-auto.template
Co-Authored-By: Brad Warren <bmw@users.noreply.github.com>
* Update letsencrypt-auto-source/tests/oraclelinux6_tests.sh
Co-Authored-By: Brad Warren <bmw@users.noreply.github.com>
* Remove specific code for scientific linux
* Change some variables names
* Update letsencrypt-auto-source/tests/oraclelinux6_tests.sh
Co-Authored-By: Brad Warren <bmw@users.noreply.github.com>
* Various corrections
* Fix tests
* Add a comment
* Update message
* Fix test message
* Update letsencrypt-auto-source/letsencrypt-auto.template
Co-Authored-By: Brad Warren <bmw@users.noreply.github.com>
* Update letsencrypt-auto-source/letsencrypt-auto
Co-Authored-By: Brad Warren <bmw@users.noreply.github.com>
* Update letsencrypt-auto-source/letsencrypt-auto
Co-Authored-By: Brad Warren <bmw@users.noreply.github.com>
* Update scripts
* More focused assertion
* Add back a test
* Update script
* Update letsencrypt-auto-source/letsencrypt-auto.template
Co-Authored-By: Brad Warren <bmw@users.noreply.github.com>
* Update letsencrypt-auto-source/letsencrypt-auto.template
Co-Authored-By: Brad Warren <bmw@users.noreply.github.com>
* Check quiet mode
* Add changelog
* Update letsencrypt-auto-source/tests/oraclelinux6_tests.sh
Co-Authored-By: Brad Warren <bmw@users.noreply.github.com>
This PR creates a pipeline triggered on tag push matching v0.* (eg. v0.40.0).
Once triggered, this pipeline will build the Windows installer and run integration tests against it, like the nightly pipeline does.
I also add a simple script to extract the relevant part of the CHANGELOG.md file and put it in the body of the GitHub release. I believe it makes things nicer.
* Create release pipeline
* Relax condition on tags
* Put beta keyword
* Update job name
* Fix release pipeline
Over the weekend, nightly tests on Windows failed for certbot-dns-google: https://dev.azure.com/certbot/web/build.aspx?pcguid=74ef9c03-9faf-405b-9d03-9acf8c43e8d6&builduri=vstfs%3a%2f%2f%2fBuild%2fBuild%2f72
The error occurred inside `oauth2client`'s locking code and the failure seems spurious as it did not reproduce this morning: https://dev.azure.com/certbot/certbot/_build/results?buildId=73
I could not find a relevant changelog entry in `oauth2client` saying they've fixed the problem, but the problematic code no longer exists in `oauth2client>=4.0`. This PR updates our minimum dependency required in an attempt to avoid spurious failures for us in the future. The only downside I am aware of is it'll make it harder for certbot-dns-google to be packaged in Debian Old Stable or Ubuntu 16.04, but I don't expect either of those things to happen anytime soon.
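For illustration, the corresponding kind of change in `certbot-dns-google`'s setup.py would look roughly like this (the placeholder comment stands in for the real dependency list):
```python
install_requires = [
    # ... other dependencies unchanged ...
    'oauth2client>=4.0',  # the problematic locking code no longer exists in >=4.0
]
```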
* bump oauth2client dep
* Update dev_constraints.txt.
* Add changelog entry for packagers.
* Add one Dockerfile for each supported architecture
* Update multi arch hooks
* Create multi arch scripts
* Update README.md
* WIP. Use build args instead of multiple Dockerfiles in build script
* WIP. Fix typo mistake
* Use build args instead of multiple Dockerfiles in build script
* WIP. Build all the architectures in one DockerHub build
* Add arm64v8 architecture
* WIP. Testing build all the architectures in one DockerHub build
* Revert "WIP. Testing build all the architectures in one DockerHub build"
This reverts commit 94a89398a4120b183d2851ac7cb9c93db0e3d187.
* Refactor tag docker images in hooks/post_push files
* Use variables instead of positional arguments
* Export externally used variables
* Use ${variable//search/replace} instead of echo $variable | sed.
* Update README.md
* Add Cleanup in build.sh script
* Fix tagging error in post_push hook
* Add "-ex" flags to bash script
* Test tagging images in build hook
* Tagging in hook/post_build instead of hook/post_push
* Push built architecture dependent image
* Use Dockerfile argument instead of fixed value
* Fix typo
* Use parameter instead of global variable
* Use custom "hook/push" to prevent duplicated push
* Make a short doctype for each function declared in common
The value of --server will now be respected, except when it is the
default value, in which case it will be changed to the staging server,
preserving Certbot's existing behavior.
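A minimal sketch of that rule; the helper name, the constant names, and the assumption that the default points at the v02 production directory are illustrative:
```python
PRODUCTION_SERVER = "https://acme-v02.api.letsencrypt.org/directory"
STAGING_SERVER = "https://acme-staging-v02.api.letsencrypt.org/directory"


def effective_server(cli_server: str, dry_run: bool) -> str:
    # Only the default value is overridden during a dry run; any explicitly
    # chosen --server is now respected.
    if dry_run and cli_server == PRODUCTION_SERVER:
        return STAGING_SERVER
    return cli_server
```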
When setting up Azure Pipelines, I didn't like having to define codecov_token for each pipeline. This works around it by using a shared variable group.
You can see this working successfully at https://dev.azure.com/certbot/certbot/_build/results?buildId=3.
* Use certbot-common.
* update instructions
This PR defines pipelines that can be run on Azure Pipelines. Currently there are two:
* `.azure-pipelines/main.yml` is the main one, executed on PRs for master, and pushes to master,
* `.azure-pipelines/advanced.yml` adds installer testing on top of the main pipeline, and is executed for `test-*` branches, release branches, and a nightly run for master.
These two pipelines cover everything currently done by AppVeyor, so AppVeyor can be decommissioned once Azure Pipelines is operational.
You can see working pipeline in my fork:
* a PR for `master` (so using main pipeline): https://github.com/adferrand/certbot/pull/65
* a PR for `test-something` (so using advanced pipeline): https://github.com/adferrand/certbot/pull/66
* uploaded coverage from Azure Pipelines: 499aa2cbf2/build
Once this PR is merged, we need to enable Azure Pipelines for Certbot. Instructions are written in `.azure-pipelines/INSTALL.md`. This document also lists all the access rights that Azure Pipelines requires on GitHub to make the CI process work.
Future work for future PRs:
* create a CD pipeline for the releases that will push the installer to GitHub releases
* implement a solution to generate notification on IRC or Mattermost when a nightly build fails
* Define pipelines
* Update locations
* Update nightly
* Use x86
* Update nightly.yml for Azure Pipelines
* Run script
* Use script
* Update install
* Use local installation
* Register warnings
* Fix pywin32 loading
* Clean context
* Enable coverage publication
* Consume codecov token
* Document installation
* Update tool to upload coverage
* Prepare pipeline artifacts
* Update artifact ignore
* Protect against codecov failures
* Add a comment about codecov
* Add a comment on RW access asked by Azure
* Add instructions
* Rename pipeline file
* Update instructions
* Update .azure-pipelines/templates/tests-suite.yml
Co-Authored-By: Brad Warren <bmw@users.noreply.github.com>
* Update .azure-pipelines/INSTALL.md
Co-Authored-By: Brad Warren <bmw@users.noreply.github.com>
* Modified scheduled pipeline
* Add comment
* Remove dynamic version-based installer name
* Remove unnecessary account ID match check.
Right now the Account object calculates an ID using md5. This is
unnecessary and causes problems on FIPS systems that forbid md5. It's
just as good to pick a random series of bytes for the ID, since the ID
gets read out of renewal/foo.conf.
However, if we switched the algorithm right now, we could wind up
breaking forward compatibility / downgradeability, since older versions
would run into this check.
Removing this check now lays the ground to change the ID-calculation
algorithm in the future.
Related to #1948 and
https://github.com/certbot/certbot/pull/1013#issuecomment-149983479.
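For illustration, the future ID-calculation could be as simple as the following sketch (not part of this PR, which only removes the check):
```python
import binascii
import os


def generate_account_id() -> str:
    # A random identifier is just as good as an md5 digest of the account key,
    # and it works on FIPS systems where md5 is forbidden.
    return binascii.hexlify(os.urandom(16)).decode('ascii')
```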
* Remove test.
* Remove unused import.
The repo description for the [3rd party Icecast plugin](https://github.com/e00E/lets-encrypt-icecast) says that the plugin isn't currently working and the repository hasn't been updated since 2017. Since it seems broken and unmaintained, let's remove it from the list of third party plugins.
I would happily add it again to the list of third party plugins if people fix and maintain it.
The README for the [3rd party heroku plugin](https://github.com/gboudreau/certbot-heroku) says it has been deprecated. Because of this, let's remove it from the list of third party plugins.
If `/etc/os-release` isn't available, primarily fall back to using `platform.linux_distribution()`. Only use `distro.linux_distribution()` on Python >= 3.8, where the `platform` function has been removed.
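A minimal sketch of the version-based choice between the two functions (the helper name is hypothetical, not Certbot's actual code):
```python
import sys


def _get_linux_distribution():
    if sys.version_info >= (3, 8):
        # platform.linux_distribution() was removed in Python 3.8.
        import distro
        return distro.linux_distribution()
    import platform
    return platform.linux_distribution()  # deprecated since Python 3.5
```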
* Try to use platform.linux_distribution() before distro equivalent
* Fix tests for py38
* Added changelog entry
This PR fixes a regression in #7337 (0.38.0) that certbot cannot run with Apache on RHEL 6.
In RHEL 6, `distro.linux_distribution()` returns `RedHatEnterpriseServer`.
In RHEL 6:
```py
>>> import distro
>>> distro.linux_distribution()
(u'RedHatEnterpriseServer', u'6.10', u'Santiago')
>>> import platform
>>> platform.linux_distribution()
('Red Hat Enterprise Linux Server', '6.10', 'Santiago')
```
In RHEL 7:
```py
>>> import distro
>>> distro.linux_distribution()
('Red Hat Enterprise Linux Server', '7.6', 'Maipo')
>>> import platform
>>> platform.linux_distribution()
('Red Hat Enterprise Linux Server', '7.6', 'Maipo')
```
* fix to run with Apache on RHEL 6
* fix docs
Fixes#7368.
When updating the changelog, I replaced the line about running tests on Python 3.8 because I personally think that support for Python 3.8 is the most relevant information for our users/packagers about our changes in this area.
* List support for Python 3.8.
* Update changelog.
Fixes#7152.
* don't check ocsp if cert is expired when getting cert information
* don't check ocsp if the cert is expired in ocsp_revoked
* update tests
* update changelog
* move pytz import to the top of the test file
This PR implements the item "register a scheduled task for certificate renewal" from the list of requirements described in #7365.
This PR adds required instructions in the NSIS installer for Certbot to create a task, named "Certbot Renew Task" in the Windows Scheduler. This task is run twice a day, to execute the command certbot renew and keep the certificates up-to-date.
Uninstalling Certbot will also remove this scheduled task.
* Implementation
* Corrections
* Update template.nsi
* Improve scripts
* Add a random delay of 12 hours
* Synchronize template against default one in pynsist 2.4
* Clean config of scheduled task
* Install only in AllUsers mode
* Add comments
* Remove the logic of single user install
* DualParserNode, DualCommentNode and DualDirectiveNode implementations
* Add DualBlockNode
* Address review comments
* Address review comments
* Call the right assertion after name change
* Simplify isPass
* Add explanation to _create_matching_list pydoc
* Break when match was found
* Get integration tests working on python 3.8
* Run unit tests on py38
* Update coveragercs to use coverage 4.5+ format
* remove line added to tox.ini
* update changelog
* xenial is the new travis default; no need to specify in .travis.yml
Add metadata keyword argument to the ParserNode interface, allowing the initialization of the object from contents of the metadata - if the implementation allows it. As an example, Augeas implementation needs nothing more than the Augeas DOM path of a configuration directive to be able to populate the ParserNode instance with all data relevant to the DirectiveNode.
The checks also allow skipping the otherwise required keyword arguments if metadata is provided.
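A rough sketch of what that can look like; the class, argument names, and metadata key below are assumptions, not the exact certbot-apache interface:
```python
from typing import Any, Mapping, Optional


class DirectiveNode:
    def __init__(self, name: Optional[str] = None,
                 ancestor: Optional["DirectiveNode"] = None,
                 metadata: Optional[Mapping[str, Any]] = None) -> None:
        if metadata is None and (name is None or ancestor is None):
            # Without metadata, the usual keyword arguments stay mandatory.
            raise TypeError("name and ancestor are required when metadata is not provided")
        self.metadata = dict(metadata or {})
        # An Augeas-backed implementation can rebuild everything else from the
        # DOM path stored in the metadata, e.g. metadata["augeaspath"].
        self.ancestor = ancestor
        self.name = name or self.metadata.get("augeaspath", "").rsplit("/", 1)[-1]
```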
* Allow creating ParserNode instances using information from metadata dictionary
* Update certbot-apache/certbot_apache/interfaces.py
Co-Authored-By: Brad Warren <bmw@users.noreply.github.com>
* Update certbot-apache/certbot_apache/interfaces.py
Co-Authored-By: Brad Warren <bmw@users.noreply.github.com>
* Address review comments
* Fix filepath comment
* Update certbot-apache/certbot_apache/interfaces.py
Co-Authored-By: Brad Warren <bmw@users.noreply.github.com>
Fixes#7212
This PR forbids os.stat and os.fstat, and fixes or provides alternatives to avoid their usage in Certbot outside of certbot.compat.filesystem.
* Reimplement private key mode propagation
* Remove other os.stat
* Remove last call of os.stat in certbot package
* Forbid stat and fstat
* Implement mode comparison checks
* Add unit tests
* Update certbot/compat/filesystem.py
Co-Authored-By: Brad Warren <bmw@users.noreply.github.com>
* Update certbot/compat/filesystem.py
Co-Authored-By: Brad Warren <bmw@users.noreply.github.com>
* Handle case where multiple ace concerns a given SID in has_min_permissions
* Add a new test scenario
* Add a simple test for has_same_ownership
* Fix name function
* Add a comment explaining an ACE structure
* Move a test in its dedicated class
* Improve a message error
* Calculate has_min_permission result using effective permission rights to be more generic.
* Change an exception message
* Add comments, avoid to skip a test.
* Update certbot/compat/filesystem.py
Co-Authored-By: Brad Warren <bmw@users.noreply.github.com>
* Find OpenSSL version
* Create and update various config files
* Update logic to use new version constraints
* SSL_OPTIONS_HASHES_NEW and SSL_OPTIONS_HASHES_MEDIUM were just being used for testing, and maintaining them is becoming untenable, so remove them.
* if we don't know the openssl version, we can't turn off session tickets
* add unit test for _get_openssl_version
* add unit tests
* placate lint
* Fix docs and tests and clean up code
* use python correctly
* update changelog
* Lint
* make comment a comment
This PR is the first step in creating an official distribution channel of Certbot for Windows. It consists essentially of creating a proper Certbot Windows installer.
Usually, distributing an application requires, in one way or another, stabilizing the application logic and its dependencies around a given version. On Windows, this usually takes the form of a frozen application that vendors its dependencies into a single executable.
There are two well-known solutions to create an executable shipping a Python application on Windows: [py2exe](http://www.py2exe.org/) and [pyinstaller](https://www.pyinstaller.org/). However, these solutions create standalone `.EXE` files: you run the `.EXE` file, which immediately launches the software.
This is not an end-user solution. When Windows users want to install a piece of software, they expect to find and download an installer. When run, the installer interfaces with Windows to set up configuration entries in the Registry, update environment variables, add shortcuts to the Start Menu, and declare an uninstaller entry with the Uninstall Manager. Quite similarly, this is what you would get from a `.deb` or `.rpm` package.
A solution that builds proper installers is [pynsist](https://pynsist.readthedocs.io/en/latest/). It is a Python project that constructs installers for Python software using [NSIS](https://sourceforge.net/projects/nsis/), the best-known free Windows installer builder.
This PR uses pynsist to build a Windows installer. The Python script to launch the installer build is `.\windows-installer\construct.py`. Once finished, the installer is located in `.\windows-installer\build\nsis`.
This installer will do the following operations during the installation:
* copy in the install path a full python distribution used exclusively for Certbot
* copy all Python requirements gathered from the `setup.py` of relevant certbot projects
* copy `certbot` and `acme`
* pre-build python binary assets
* register the existence of the application correctly in Windows Registry
* prepare a procedure to uninstall Certbot
* and of course, expose `certbot` executable to the Windows command line, like on Linux, to be able to launch it as any CLI application from Batch or Powershell
This installer supports updates: downloading a new version and running it on a Windows machine with an existing Certbot installation will replace it with the new version.
Future capabilities not included in this PR:
* auto-update of Certbot when a new release is available
* online documentation for Windows
* register a scheduled task for certificate renewal
* installer distribution (continuous deployment + distribution channels)
* method to check the downloaded installer is untampered
* Setup config
* Fix shortcut
* Various improvments
* Update windows-installer/construct.py
Co-Authored-By: Brad Warren <bmw@users.noreply.github.com>
* Split into several method
* Change installer name
* Remove DNS plugins for now
* Add a comment about administrator privileges
* Update welcome
* Control python version
* Control bitness
* Update windows-installer/construct.py
Co-Authored-By: Brad Warren <bmw@users.noreply.github.com>
* Update windows-installer/construct.py
Co-Authored-By: Brad Warren <bmw@users.noreply.github.com>
* Update windows-installer/construct.py
Co-Authored-By: Brad Warren <bmw@users.noreply.github.com>
This PR contains the changes requested in initial pre-review comments of #7308
Move properties to class pydocs in interfaces.py
Prefer class ABC register() functionality instead of class inheritance for interface classes
Add apache implementation specific functions to interfaces
* Move class argument definitions to class pydoc
* Add apache specific functionality to the interface
* Bring inheritance back
* Define initialization for different ParserNode classes
* Add parsernode utils to check keyword arguments and document the defaults in pydoc
* Fix pydocs and make BlockNode a child of DirectiveNode
* Refine docs, and remove unused __init__ from BlockNode
* Split parsernode util tests to their own respective file
* Skip cover for dummy calls to super
* Add types to method documentation
* Add documentation for children
Smallest possible fix for #7106
* Replace platform.linux_dependencies with distro.linux_dependencies
* run build.py
* Add minimum version of 1.0.1
* Pin back requests package
* Update changelog
This PR supersedes #7353.
It fixes the execution of nginx oldest tests when these tests are executed on top of the modifications made in #7337. This execution failure revealed the fact that in some cases, the wrong version of certbot logic was used during integration tests (namely the logic lying in the codebase of the branch built, instead of the logic from the version of certbot declared by certbot-nginx for instance).
I let you appreciate my inline comment for the explanation and the workaround.
Thanks a lot to @bmw who found this python/pytest madness.
You can see the oldest tests succeeding with the logic of #7337 + this PR here: https://travis-ci.com/certbot/certbot/builds/124816254
* Remove certbot root from PYTHONPATH during integration tests
* Add a biiiiig comment.
On Windows you can have several drives (`C:`, `D:`, ...), which are roughly (really roughly) the equivalent of mount points, since each drive is usually associated with a specific physical partition.
So you can have paths like `C:\one\path`, `D:\another\path`.
In parallel, `os.path.relpath(path, start='.')` calculates the relative path between the given `path` and a `start` path (the current directory if not provided). In recent versions of Python, `os.path.relpath` will fail if `path` and `start` are not on the same drive, because there is no relative path between two paths like `C:\one\path` and `D:\another\path`.
I saw unit tests failing because of this in two locations. This occurs when the Certbot codebase being tested is on one drive (like `D:`) while the default temporary directory used by `tempfile` is on another drive (most of the time the `C:` drive).
This PR fixes that.
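One way to guard against this kind of cross-drive failure is a small fallback helper (hypothetical, not necessarily what this PR does):
```python
import os


def safe_relpath(path: str, start: str = os.curdir) -> str:
    try:
        return os.path.relpath(path, start)
    except ValueError:
        # On Windows, relpath raises ValueError when path and start are on
        # different drives; fall back to the absolute path in that case.
        return os.path.abspath(path)
```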
* Use travis_retry in travis builds to retry the farm tests
* travis_retry is a bash function, so it can be called only from current bash
* Update .travis.yml
* Update .travis.yml
This PR adds OVERRIDE_CLASS in certbot-apache/entrypoint.py for Scientific Linux. Fixes#7248.
* add OVERRIDE_CLASS for Scientific Linux os name
* add entry for Scientific Linux using "scientific" as key
* Update changelog
Let's begin. All pipelines are defined in `.azure-pipelines`. Currently there are two:
* `.azure-pipelines/main.yml` is the main one, executed on PRs for master, and pushes to master,
* `.azure-pipelines/advanced.yml` adds installer testing on top of the main pipeline, and is executed for `test-*` branches, release branches, and a nightly run for master.
Several templates are defined in `.azure-pipelines/templates`. These YAML files aggregate common jobs configuration that can be reused in several pipelines.
Unlike Travis, where CodeCov is working without any action required, CodeCov supports Azure Pipelines
using the coverage-bash utility (not python-coverage for now) only if you provide the Codecov repo token
using the `CODECOV_TOKEN` environment variable. So `CODECOV_TOKEN` needs to be set as a secured
environment variable to allow the main pipeline to publish coverage reports to CodeCov.
This INSTALL.md file explains how to configure Azure Pipelines with Certbot in order to execute the CI/CD logic defined in `.azure-pipelines` folder with it.
During this installation step, warnings describing user access and legal commitments will be displayed like this:
```
!!! ACCESS REQUIRED !!!
```
This document assumes that the Azure DevOps organization is named _certbot_, and the Azure DevOps project is also _certbot_.
Use your GitHub user for a normal GitHub account, or a user that has administrative rights to the GitHub organization if relevant.
### Having an Azure DevOps account
- Go to https://dev.azure.com/, click "Start free with GitHub"
- Login to GitHub
```
!!! ACCESS REQUIRED !!!
Personal user data (email + profile info, in read-only)
```
- Microsoft will create a Live account using the email referenced for the GitHub account. This account is also linked to the GitHub account (meaning you can log in to it using GitHub authentication)
- Proceed with account registration (birth date, country), add details about name and email contact
```
!!! ACCESS REQUIRED !!!
Microsoft proposes to send commercial links to this mail
Azure DevOps terms of service need to be accepted
```
_Logged to Azure DevOps, account is ready._
### Installing Azure Pipelines to GitHub
- On GitHub, go to Marketplace
- Select Azure Pipeline, and "Set up a plan"
- Select Free, then "Install it for free"
- Click "Complete order and begin installation"
```
!!! ACCESS !!!
Azure Pipeline needs RW on code, RO on metadata, RW on checks, commit statuses, deployments, issues, pull requests.
RW access here is required to allow updating the pipeline YAML files from the Azure DevOps interface, and to
update the status of builds and PRs on the GitHub side when Azure Pipelines are triggered.
Note however that no admin access is defined here: this means that Azure Pipelines cannot do anything with
protected branches, like master, and cannot modify the security context around this on GitHub.
Access can be defined for all or only selected repositories, which is nice.
```
- Redirected to Azure DevOps, select the account created in the _Having an Azure DevOps account_ section.
- Select the organization, and click "Create a new project" (let's name it the same as the targeted GitHub repo)
- The visibility is public, to benefit from 10 parallel jobs
```
!!! ACCESS !!!
Azure Pipelines needs access to the GitHub account (in terms of being able to check it is valid), and to the resources shared between the GitHub account and Azure Pipelines.
```
_Done. We can move to pipelines configuration._
## Import an existing pipeline from the `.azure-pipelines` folder
- On Azure DevOps, go to your organization (eg. _certbot_) then your project (eg. _certbot_)
- Click "Pipelines" tab
- Click "New pipeline"
- Where is your code?: select "__Use the classic editor__"
__Warning: Do not choose the GitHub option in the "Where is your code?" section. Indeed, this option will trigger an OAuth
permission grant from Azure Pipelines to GitHub in order to set up a GitHub OAuth application. The permissions asked
for then are way too broad (admin level on almost everything), while the classic approach does not add any more
permissions, and works perfectly well.__
- Select GitHub in "Select your repository section", choose certbot/certbot in Repository, master in default branch.
- Click on YAML option for "Select a template"
- Choose a name for the pipeline (eg. test-pipeline), and browse to the actual pipeline YAML definition in the
:warning: __Pipeline has failed__: [Link to the build](https://dev.azure.com/$(Build.Repository.ID)/_build/results?buildId=$(Build.BuildId)&view=results)\n\n\
---"
curl -i -X POST --data-urlencode "payload={\"text\":\"${MESSAGE}\"}" "$(MATTERMOST_URL)"
.. This file contains a series of comments that are used to include sections of this README in other files. Do not modify these comments unless you know what you are doing. tag:intro-begin
Certbot is part of EFF’s effort to encrypt the entire Internet. Secure communication over the Web relies on HTTPS, which requires the use of a digital certificate that lets browsers verify the identity of web servers (e.g., is that really google.com?). Web servers obtain their certificates from trusted third parties called certificate authorities (CAs). Certbot is an easy-to-use client that fetches a certificate from Let’s Encrypt—an open certificate authority launched by the EFF, Mozilla, and others—and deploys it to a web server.
Anyone who has gone through the trouble of setting up a secure website knows what a hassle getting and maintaining a certificate is. Certbot and Let’s Encrypt can automate away the pain and let you turn on and manage HTTPS with simple commands. Using Certbot and Let's Encrypt is free, so there’s no need to arrange payment.
How you use Certbot depends on the configuration of your web server. The best way to get started is to use our `interactive guide <https://certbot.eff.org>`_. It generates instructions based on your configuration settings. In most cases, you’ll need `root or administrator access <https://certbot.eff.org/faq/#does-certbot-require-root-administrator-privileges>`_ to your web server to run Certbot.
Certbot is meant to be run directly on your web server, not on your personal computer. If you’re using a hosted service and don’t have direct access to your web server, you might not be able to use Certbot. Check with your hosting provider for documentation about uploading certificates or using certificates issued by Let’s Encrypt.
Certbot is a fully-featured, extensible client for the Let's
* Can optionally install a http -> https redirect, so your site effectively
runs https only (Apache only)
* Fully automated.
* Configuration changes are logged and can be reverted.
* Supports an interactive text UI, or can be driven entirely from the
command line.
* Free and Open Source Software, made with Python.
.. Do not modify this comment unless you know what you're doing. tag:features-end
For extensive documentation on using and contributing to Certbot, go to https://certbot.eff.org/docs. If you would like to contribute to the project or run the latest code from git, you should read our `developer guide <https://certbot.eff.org/docs/contributing.html>`_.
$(error The '$(SPHINXBUILD)' command was not found. Make sure you have Sphinx installed, then set the SPHINXBUILD environment variable to point to the full path of the '$(SPHINXBUILD)' executable. Alternatively you can add the directory with the executable to your PATH. If you don't have Sphinx installed, grab it from http://sphinx-doc.org/)
$(error The '$(SPHINXBUILD)' command was not found. Make sure you have Sphinx installed, then set the SPHINXBUILD environment variable to point to the full path of the '$(SPHINXBUILD)' executable. Alternatively you can add the directory with the executable to your PATH. If you don't have Sphinx installed, grab it from https://www.sphinx-doc.org/)
@@ -137,6 +136,7 @@ class VirtualHost(object): # pylint: disable=too-few-public-methods
        self.enabled = enabled
        self.modmacro = modmacro
        self.ancestor = ancestor
        self.node = node

    def get_names(self):
        """Return a set of all names."""