I think we should use our `pip_install*` scripts wherever we can and I'm not quite sure yet if I'd call `repoze.sphinx.autointerface` unmaintained.
* use pip_install_editable
* update sphinx comment
Since Saturday the CI pipeline has been failing due to several Sphinx errors. See https://dev.azure.com/certbot/certbot/_build/results?buildId=3928&view=logs&j=d74e04fe-9740-597d-e9fa-1d0400037dfd&t=dde413a4-f24c-59a0-9684-e33d79f9aa02
First, the build of certbot-dns-google is failing because of a particular configuration. It seems that this configuration was added to enable support for the RST directive `.. code-block:: json` in the documentation. However, it does not seem to be necessary for a similar situation in the certbot-dns-route53 documentation, so let's try to remove it and fix the Sphinx builds.
Second, the Sphinx builds were not pinning dependencies, so Sphinx 4.x (released yesterday) started to be used in the pipeline. Sadly this new version is not compatible with the plugin `repoze.sphinx.autointerface`, which we use to extract documentation from `zope.interface`. So I fixed the pinning and also explicitly pinned Sphinx to 3.5.x for now.
Technically speaking the second change is sufficient to fix the first error, but I kept the dedicated fix because it improves the documentation in my opinion.
This situation could also be fixed by not requiring `repoze.sphinx.autointerface`, but that is possible only if we remove `zope.interface` from Certbot. Luckily I started that work a few days ago ;).
* Remove explicit lexer call in certbot-dns-google doc builds.
* Write a valid JSON file in the documentation
* Apply constraints to sphinx build environments
* Pin Sphinx to 3.5.4
* Update dependencies
* Pin traitlets
Fixes https://github.com/certbot/certbot/issues/8832.
[These instructions are creating confusion among users](https://github.com/certbot/certbot/issues/8832) and [frustration among packagers](https://pagure.io/fesco/issue/2570) for whom the warning at the top of the OS packaging section doesn't apply. Because of this, I think we should remove them in favor of our instruction generator and snap/docker/pip instructions.
I also told the Fedora packagers that we could probably do this in response to their continued improvements to their Certbot packages, such as the renewal timer that is now enabled by default.
Fixes https://github.com/certbot/certbot/issues/8781.
This PR turns our test farm tests into a normal package so they and their dependencies can be tracked and installed like our other packages.
Other noteworthy changes in this PR:
* Rather than being written to your CWD, logs are now placed in a temporary directory whose path is printed to the terminal.
* `tests/letstest/auto_targets.yaml` was deleted rather than renamed because the file is no longer used.
* make a letstest package
* remove deleted deps
* fix letstest install
* add __init__.py
* call main
* Explicitly mention activating venv
* rerename file
* fix version.py path
* clarify "this"
* Use >= instead of caret requirement
* Update assertTrue/False to Python 3 precise asserts
* Fix test failures
* Fix test failures
* More replacements
* Update to Python 3 asserts in acme-module
* Fix Windows test failure
* Fix failures
* Fix test failure
* More replacements
* Don't include the semgrep rules
* Fix test failure
* Move version.py to tests/letstest since it's used by test_sdists.sh
* Delete unused components of certbot-auto
* Remove test_leauto_upgrades.sh and references to it
* Remove test_letsencrypt_auto_certonly_standalone.sh and references to it
* Remove outstanding references to certbot-auto
* Remove references to letsencrypt-auto
* find certbot in the correct directory
* delete letsencrypt-auto-source line from .isort.cfg since that directory no longer contains any python code
* remove (-auto) from certbot(-auto)
* delete line from test
* Improve style for version.py
Fixes #8802.
Also removed the unused `kgs` cruft while I was here, since it's leftover from the [initial release commit](3c08b512c3) and I'm pretty sure we don't use that anymore.
* Expand manual DNS challenge instructions
* Less jargon
Co-authored-by: ohemorange <ebportnoy@gmail.com>
* Less is more
Co-authored-by: ohemorange <ebportnoy@gmail.com>
* Make it clearer where to look in Google's Toolbox
* Reshuffle text
* Show verify instructions only on last dns-01 challenge
* Swap domain and value
* Remove '(also)'
* Fix DNS verify message for mixed challenge types
* Add a lengthy comment about why there's a full stop after `{domain}`
* Typo
Co-authored-by: ohemorange <ebportnoy@gmail.com>
In https://github.com/certbot/certbot/pull/8748#discussion_r605457670 we discussed changing the dict used to set OS-specific options for the Apache configurators into a dedicated object.
* Create _OsOptions class to configure the os specific options of the Apache configurators
* Fix tests
* Clean imports
* Fix naming
* Fix compatibility tests
* Rename a class
* Ensure restart_cmd_alt is set for specific OSes.
* Add docstring
* Fix override
* Fix coverage
This is one of the things that newer versions of `pylint` complain about.
* git grep -l super\( | xargs sed -i 's/super([^)]*)/super()/g'
* fix spacing
I think this PR improves tools/snap/build_remote.py's output in a number of ways such as:
* Logs of snap builds were being deleted because they weren't being copied out of the temporary directory added in https://github.com/certbot/certbot/pull/8719.
* The lock should now always be acquired before printing output when multiple processes are running, which helps prevent processes from mixing their output with each other.
* Output is never buffered, which ensures that repeated calls to `print` from the same process while it holds the output lock are kept together (a minimal sketch of this pattern appears right after this list).
* The case where we printed output about the "chroot problem" and stopped retrying the build has been deleted because with the fix in https://github.com/certbot/certbot/pull/8719, we should be able to recover in this case.
* If the build failed for any reason, we dump as much output about the problem as we can. I think most times we won't need to read this output, but I personally prefer it being there in case we want it for some reason. Due to this change, I also simplified `_build_snap` and `_dump_results` a bit since `_build_snap` handles printing logs as needed.
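For illustration, here is a minimal sketch of the locking-plus-flushing pattern described above; the function name and the way the lock is shared are simplified and are not the actual `build_remote.py` code:

```
from multiprocessing import Lock

def print_grouped(lock, *lines):
    """Print a group of lines atomically and unbuffered."""
    with lock:  # acquire before printing so concurrent workers don't interleave output
        for line in lines:
            print(line, flush=True)  # flush=True avoids buffering, keeping the group together

# The parent process creates the lock once and shares it with every worker,
# e.g. through a multiprocessing.Pool initializer.
output_lock = Lock()
print_grouped(output_lock, 'certbot: build started', 'certbot: build finished')
```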
* print more output
* lock when printing output
* clarify purpose of lock
* preserve logfiles
* python better
* consistently flush output
* remove workspaces dict
* rename variable
* remove unused variable
* don't use all which exits early
* fix typo
Built on top of #8748, this PR re-enables mypy strict mode and adds the appropriate corrections to pass the type checks.
* Upgrade mypy
* First step for acme
* Cast for the rescue
* Fixing types for certbot
* Fix typing for certbot-nginx
* Finalize type fixes, configure no optional strict check for mypy in tox
* Align requirements
* Isort
* Pylint
* Protocol for python 3.6
* Use Python 3.9 for mypy, make code compatible with Python 3.8<
* Pylint and mypy
* Pragma no cover
* Pythonic NotImplemented constant
* More type definitions
* Add comments
* Simplify typing logic
* Use vararg tuple
* Relax constraints on mypy
* Add more type
* Do not silence error if target is not defined
* Conditionally import Protocol for type checking only
* Clean up imports
* Add comments
* Align python version linting with mypy and coverage
* Just ignore types in an unused module
* Add comments
* Fix lint
* Work in progress
* Finish type control
* Isort
* Fix pylint
* Fix imports
* Fix cli subparser
* Some fixes
* Coverage
* Remove --no-strict-optional (obviously...)
* Update certbot-apache/certbot_apache/_internal/configurator.py
Co-authored-by: Brad Warren <bmw@users.noreply.github.com>
* Update certbot/certbot/_internal/display/completer.py
Co-authored-by: Brad Warren <bmw@users.noreply.github.com>
* Cleanup dns_google
* Improve lock controls and fix subparser
* Use the expected interfaces
* Fix code
Co-authored-by: Brad Warren <bmw@users.noreply.github.com>
Fixes #8773
I took option 2 from the issue mentioned above (adding `typing-extensions` to the dev dependencies) to avoid modifying Certbot's runtime requirements, given that what needs to be added is useful only for mypy.
I did not change the Python version used to execute the linting and mypy on the standard tests, given that the tox `docker_dev` target already checks if the development environment is working for Python < 3.8.
We were originally using `socket.errno` with a `type: ignore` and a comment suggesting that this attribute needs to be included in the typeshed. This is incorrect.
While it's true that [socket imports errno](43682f1e39/Lib/socket.py (L58)), it's not intended to be part of its API. https://docs.python.org/3/library/socket.html has no mention of it.
Instead, we should be using the standard `errno` module and remove this `type: ignore`.
Some are no longer needed and others' comments are out of date.
For the changes to the acme nonce errors, `Exception` doesn't take kwargs. The error message about this from our own classes isn't super helpful:
```
In [2]: BadNonce('nonce', 'error', foo='bar')
---------------------------------------------------------------------------
TypeError Traceback (most recent call last)
<ipython-input-2-54555658ef99> in <module>
----> 1 BadNonce('nonce', 'error', foo='bar')
TypeError: __init__() got an unexpected keyword argument 'foo'
```
but if you try this on `Exception` which these classes inherit from, you get:
```
In [4]: Exception(foo='bar')
---------------------------------------------------------------------------
TypeError Traceback (most recent call last)
<ipython-input-4-028b924f74c5> in <module>
----> 1 Exception(foo='bar')
TypeError: Exception() takes no keyword arguments
```
See https://github.com/python/typeshed/pull/2348 for more info.
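For context, the shape of the change is roughly the following (a sketch, not the exact `acme/errors.py` code; `NonceError` is stubbed out here):

```
class NonceError(Exception):
    """Stand-in for acme.errors.NonceError in this sketch."""


class BadNonce(NonceError):
    """Bad nonce error."""
    def __init__(self, nonce, error, *args):
        # No **kwargs anymore: Exception itself rejects keyword arguments,
        # so accepting them here only produced the confusing TypeError above.
        super().__init__(*args)
        self.nonce = nonce
        self.error = error
```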
* remove outdated ignores
* update locking ignore comment
* don't accept kwargs
Fixes #8425
This PR upgrades mypy to the latest version available, 0.812.
Given the advanced type inference capabilities provided by this newer version, this PR also fixes various type inconsistencies that are now detected. Here are the non-obvious changes made to fix types:
* typing in mixins has been solved using `Protocol` classes, as recommended by mypy (https://mypy.readthedocs.io/en/latest/more_types.html#mixin-classes, https://mypy.readthedocs.io/en/stable/protocols.html)
* `cast` when we are playing with `Union` types
This PR also disables the strict optional checks that have been enabled by default in recent versions of mypy. Once this PR is merged, I will create an issue to study how these checks can be enabled.
`typing.Protocol` is available only since Python 3.8. To keep compatibility with Python 3.6, I try to import the class `Protocol` from `typing` and fall back to assigning `object` to `Protocol` if that fails. This way the code works with all versions of Python, but the mypy check can only be run with Python 3.8+ because it needs the Protocol feature. As a consequence, tox runs mypy under Python 3.8.
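For illustration, the fallback described above looks roughly like this (a sketch of the approach, not the exact code):

```
try:
    # typing.Protocol only exists on Python 3.8+.
    from typing import Protocol
except ImportError:  # pragma: no cover
    # On Python 3.6/3.7, fall back to a plain base class. Runtime behavior is
    # unchanged; only mypy (which we run on Python 3.8+) sees the real Protocol.
    Protocol = object  # type: ignore
```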
Alternatives are:
* importing `typing_extensions`, which backports the newest typing features to Python 3.6, but this implies adding a dependency to Certbot just to run mypy
* redesigning the classes concerned to not use mixins, or to use them differently, but this implies modifying code that has nothing wrong with it, just to instruct mypy about the contexts in which the mixins can be used
* ignoring types for these classes with `# type: ignore`, but then we lose the benefit of mypy for them
* Upgrade mypy
* First step for acme
* Cast for the rescue
* Fixing types for certbot
* Fix typing for certbot-nginx
* Finalize type fixes, configure no optional strict check for mypy in tox
* Align requirements
* Isort
* Pylint
* Protocol for python 3.6
* Use Python 3.9 for mypy, make code compatible with Python 3.8<
* Pylint and mypy
* Pragma no cover
* Pythonic NotImplemented constant
* More type definitions
* Add comments
* Simplify typing logic
* Use vararg tuple
* Relax constraints on mypy
* Add more type
* Do not silence error if target is not defined
* Conditionally import Protocol for type checking only
* Clean up imports
* Add comments
* Align python version linting with mypy and coverage
* Just ignore types in an unused module
* Add comments
* Fix lint
I recently noticed that we only support versions of `setuptools` that support environment markers, which allows us to simplify our `setup.py` files a bit.
In #8649 we added some code to trick pynsist into understanding that `abi3` wheels for Windows are forward compatible, meaning that the cryptography wheel tagged `cp36-abi3` is in fact compatible with Python 3.6+, not only Python 3.6.
Since pynsist 2.7 the tool understands `abi3` wheels properly, so this trick is not needed anymore.
Please note that although this PR modifies the pynsist pinning in `dev_constraints.txt`, it will have no effect since pynsist currently escapes the pinning system. This is handled in https://github.com/certbot/certbot/pull/8749.
* Pin pynsist
* Update dependencies
* Set windows installer a proper python project
* Optimize usage of the venvs
* Add windows-installer when venv is set up
* Fix call
* Remove env marker
Fixes #8700
Now that `snapcraft remote-build` truly uses new builds for each call, we can split the builds to have a dedicated Azure job for each target architecture. This PR does that.
* Split snap_build job on each architecture
* Also parallelize the publish_snap jobs over each architecture
Fixes #8661.
As mentioned in https://github.com/certbot/certbot/issues/8661#issuecomment-806168214, there are quite a few remaining references, but until we modify the release script, we still need those. The changes here and the list there were created by grepping for the following terms:
```
certbot-auto
cb-auto
cbauto
certbotauto
letsencrypt-auto
le-auto
leauto
letsencryptauto
LEAUTO
LE_AUTO
LETSENCRYPT_AUTO
LETSENCRYPTAUTO
CB_AUTO
CERTBOT_AUTO
CBAUTO
CERTBOTAUTO
```
* Remove references to certbot-auto from certbot code
* Remove references to LEAUTO
* Remove references to CERTBOT_AUTO
* Remove references to letsencrypt-auto
* Remove references to certbot-auto from docs and tools
* remove cli constants header files
* Remove Python virtual environment section
* Upgrade cryptography to 3.4.6
* Fix comment with instructions for how to use hashin
* run tools/rebuild_certbot_constraints.py
* add deps for building cryptography in snaps
* Update cryptography build dependencies for docker
* Update sources for test farm tests
* Remove rust if it's installed for test farm tests
* source bootstrap script and call sudo as needed
We recently observed several unexpected behaviors during the execution of the snap jobs in Azure. In particular, it seems that `snapcraft remote-build` tends to reattach to the latest builds on Launchpad triggered by the nightly builds on master, independently of the actual branch, the state of the code, or the targeted architectures.
First, if the builds on Launchpad are stalled for some reason, this effectively blocks any other Azure snap job until someone manually cancels the builds on Launchpad. Second, it means that the outcome of the builds may be inconsistent, because they can be the result of a build of the master sources even if you are on a PR that modifies these sources (including `snapcraft.yaml`).
After digging into the `snapcraft` source code, I realized that the signature computed to decide whether a build should be resumed is not based on a hash of the snapcraft working directory's content, but is simply a hash of the working directory's absolute path *itself*. This means that every build triggered from, for instance, the working directory `/my/path/certbot` is recognized as the same build on the Launchpad side and may be resumed if it already exists, independently of the source code, `snapcraft.yaml`, or the targeted architectures.
For the record, relevant parts in `snapcraft` source code:
82024d3748/snapcraft/project/_project.py (L44)
82024d3748/snapcraft/project/_project.py (L86-L89)
82024d3748/snapcraft/cli/remote.py (L128-L132)
This PR effectively makes the resume-build mechanism a no-op by first moving the source code into a temporary directory with a random name before running `snapcraft remote-build`. This way the signature is never the same and builds are always recognized as brand new builds.
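A minimal sketch of the workaround (the helper name is illustrative; the real logic lives in `tools/snap/build_remote.py` and passes additional flags to snapcraft):

```
import os
import shutil
import subprocess
import tempfile

def remote_build(source_dir):
    """Run snapcraft remote-build from a uniquely named copy of the sources.

    Because the copy lives in a random temporary directory, snapcraft's
    path-based signature never matches a previous build, so no stale
    Launchpad build can be resumed.
    """
    workspace = tempfile.mkdtemp()
    build_dir = shutil.copytree(source_dir, os.path.join(workspace, 'certbot'))
    subprocess.run(['snapcraft', 'remote-build'], cwd=build_dir, check=True)
    return build_dir
```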
* Invalidate snapcraft remote-build cache by using a temporary workspace.
* Capture one more state in the build
* Clarify the certificate naming convention mechanism in a note.
* Add certificate name convention in user guide, refer to it in compatibility page.
* Update certbot/docs/compatibility.rst
Co-authored-by: alexzorin <alex@zor.io>
* Update certbot/docs/using.rst
Co-authored-by: alexzorin <alex@zor.io>
* Update certbot/docs/using.rst
Co-authored-by: alexzorin <alex@zor.io>
* Improve the note about naming conventions
Co-authored-by: alexzorin <alex@zor.io>
While working on #8640, I realized that there were some hidden circular dependencies in the certbot._internal.cli package. Certbot could break if the order of these imports changed.
This PR fixes that and applies isort on top of the result.
* Kill snapcraft build when a "Chroot problem" is encountered
* Display specific helper for "Chroot problem" status and cancel retry mechanism in this case.
* Isolate build tmp directories
* Configure XDG_CACHE_HOME
* Kill snapcraftctl when a chroot problem is encountered
Fixes #8427
This PR converts the Python 2 type hints into Python 3 type annotations. I used the project https://github.com/ilevkivskyi/com2ann, which was designed for that specific purpose and did it very well.
The only remaining things to do were to fix broken type hints that became invalid code after the migration, and to shorten lines that became too long with the new syntax.
* Raw execution of com2ann
* Fixing broken type annotations
* Cleanup imports
There are still some left, but removing them would make the `modification_check` test fail. Some are still in `tools`, and they can probably be removed as well. `with_statement` was officially introduced in Python 2.5, so there's really old stuff in the code base.
Fixes https://github.com/certbot/certbot/issues/8690.
After this PR, we'll let the release script make its automated changes to certbot-auto as part of the 1.14.0 release and then never make any code changes to certbot-auto ever again!
* disable upgrades on debian
* update test_leauto_upgrades.sh
* update changelog
* revoke: try determine the server automatically
When revoking via --cert-name, use the server from the lineage (unless
overridden by the CLI).
* RenewableCert.storage might be None
* guard against an empty lineage server
* nginx: authenticate all matching vhosts for HTTP01
Previously, the nginx authenticator would set up the HTTP-01 challenge
response on a single HTTP vhost which matched the challenge domain.
The nginx authenticator will now set the challenge response on every
vhost which matches the challenge domain, including duplicates and HTTPS
vhosts.
This makes the authenticator usable behind a CDN where all origin
traffic is performed over HTTPS and also makes the authenticator work
more reliably against "invalid" nginx configurations, such as those
where there are duplicate vhosts.
* some typos
* dont authenticate the same vhost twice
One vhost may appear in both the HTTP and HTTPS vhost lists. Use a set()
to avoid trying to modify the same vhost twice.
* fix type annotations
* rewrite changelog entry
Fixing #8634. It's my first time contributing to this repository, so if there's something wrong please let me know.
Before this fix
```
$ python3 extract_changelog.py 1.12.0
...
### Fixed
* Fixed the apache component on openSUSE Tumbleweed which no longer provides
an apache2ctl symlink and uses apachectl instead.
* Fixed a typo in `certbot/crypto_util.py` causing an error upon attempting `secp521r1` key generation
More details about these changes can be found on our GitHub repo.
```
After this fix
```
$ python3 extract_changelog.py 1.12.0
...
### Fixed
* Fixed the apache component on openSUSE Tumbleweed which no longer provides
an apache2ctl symlink and uses apachectl instead.
* Fixed a typo in `certbot/crypto_util.py` causing an error upon attempting `secp521r1` key generation
More details about these changes can be found on our GitHub repo.
```
There is some code in [`_paths_parser`](ae3ed200c0/certbot/certbot/_internal/cli/paths_parser.py (L17-L34)) which has the effect of varying the value type of `config.cert_path` and `config.key_path` based on the CLI verb. When the verb is `revoke`, the type is a tuple `(path: str, contents: bytes)`, otherwise it is a single `str` representing the file path. (I wasn't able to find a written reason as to why it works this way).
This commit removes that special `revoke` case and ensures it is always a `str`.
Why change it now?
I am trying to write some changes, and there's some code in `cert_manager` which only works if the verb is `revoke`, if you hack `config.cert_path` to be a tuple beforehand, or if you [(not actually in `master`) try to sniff for the value type](49911afaa6/certbot/certbot/_internal/cert_manager.py (L224-L227)). I have a bad feeling about such workarounds. I would prefer to just make these variables simpler to use, but I'm open to opinions.
In addition to the test suites, I've manually tested `revoke` (including by `--key-path`) and `install`. Are there other places I may have missed?
Unblocks #8636 and #8671.
* docs: rewrite "Revoking certificates"
- `--cert-name` has been supported for a long time
- `--delete-after-revoke` is default
- Mention that non-default `--server` must be specified
- Document difference between acme key/cert key revocation methods
- Reshuffle text to keep more important things earlier
* minor edits
* remove revocation note
* remove "preauthorization" revocation method
* rewrite deletion note
Fixes #8680.
We seem to have no existing testing code anywhere in this vicinity, so I figured I'd get this up quickly and then work on that. Manual tests (renew a staging certificate, which should be allowed; renew a non-staging cert as staging, which should error) passed.
* Remove check for 'fake' in issuer name when renewing certs
* Change fake issuer name to make sure we're not relying on it anywhere
This PR deprecates the certbot-auto-specific CLI flags, with the goal of removing them in a future release as discussed in #8483.
* Deprecate certbot-auto specific flags
* Update changelog
* Clean tests
Co-authored-by: Brad Warren <bmw@users.noreply.github.com>
Since Ubuntu 18.04 there is python3-certbot-apache, which should be the recommended package.
The Debian package names should probably be updated accordingly.
Fixes https://github.com/certbot/certbot/issues/8494.
I left the `six` dependency pinned in `tests/letstest/requirements.txt` and `tools/oldest_constraints.txt` because `six` is still a transitive dependency with our current pinnings.
The extra moving around of imports is due to me using `isort` to help me keep dependencies in sorted order after replacing imports of `six`.
* remove some six usage in acme
* remove six from acme
* remove six.add_metaclass usage
* fix six.moves.zip
* fix six.moves.builtins.open
* six.moves server fixes
* 's/six\.moves\.range/range/g'
* stop using six.moves.xrange
* fix urllib imports
* s/six\.binary_type/bytes/g
* s/six\.string_types/str/g
* 's/six\.text_type/str/g'
* fix six.iteritems usage
* fix itervalues usage
* switch from six.StringIO to io.StringIO
* remove six imports
* misc fixes
* stop using six.reload_module
* no six.PY2
* rip out six
* keep six pinned in oldest constraints
* fix log_test.py
* update changelog
* Update cli.ini
Sharing back some extended examples I desired, did not find, and derived on my own
* Update cli.ini
Alex,
ok - simplified as requested
Matt
* Update cli.ini
removed trailing quote on line 32
* Update certbot/examples/cli.ini
Co-authored-by: alexzorin <alex@zor.io>
* Update certbot/examples/cli.ini
Co-authored-by: alexzorin <alex@zor.io>
* Update certbot/examples/cli.ini
Co-authored-by: alexzorin <alex@zor.io>
* remove stray newline
Co-authored-by: alexzorin <alex@zor.io>
Fixes https://github.com/certbot/certbot/issues/7913.
I only added the deprecation warning to `certbot.tests.util` because that's the only place where I think someone could be using the `mock` module through our API.
* remove external mock from acme
* update Certbot's mock usage
* remove mock dependency in plugins
* remove external mock from compatibility test
* add changelog entry
* add amazon linux to auto targets
* disable updates outside of debian and rhel
* test certbot-auto with disabled upgrades
* try new approach to testing
* remove bad space
* tweak error text
* add changelog entry
* fix bad certbot-auto commit
* test new error text
* update changelog
* update error text
* Remove deprecated options as early as possible using an explicit list
* add deprecated options to cli init import list
* use correct dict comprehension syntax for py3
* lint
* add test for renewal reconstitution code
* add test to ensure we're not saving deprecated values
* comment code
Fixes #8389 and #8584.
This PR makes the necessary modifications to officially drop Python 2 support in the Certbot project.
I did not remove the specific Python 2 compatibility branches that have been added in various places in the codebase, in order to reduce the size of this PR; this will be done in a future one.
* Update classifiers and python_requires in setup.py
* Remove warnings about Python 2 deprecation
* Remove Azure jobs on Python 2.7
* Remove references to python 2 in documentation
* Pin dnspython to 2.1.0
* Update changelog
* Remove warning ignore
Fixes https://github.com/certbot/certbot/issues/8580.
With this PR, it should now be possible to run the oldest tests natively on Linux, at least when using an older version of Python 3, which hasn't been possible in a long time. Unfortunately, this isn't possible on macOS which I opened https://github.com/certbot/certbot/issues/8589 to track.
You can see the full test suite running with these changes at https://dev.azure.com/certbot/certbot/_build/results?buildId=3283&view=results.
I took the version numbers for the packages I updated by searching for the oldest version of each dependency that I think we should try to support, based on the updated comments at the top of `oldest_constraints.txt`. While kind of annoying, I think it'd be a good idea for the reviewer to double check that I didn't make a mistake with the versions I used here.
To find these versions, I used https://packages.ubuntu.com, https://packages.debian.org, and a CentOS 7 Docker image with EPEL 7 installed. For the latter, not all packages are available in Python 3 yet (which is something Certbot's EPEL package maintainers are working on) and in that case I didn't worry about the system because I think they can/will package the newest version available. If they end up hitting any issues here when trying to package Certbot on Python 3, we can always work with them to fix it.
* remove py27 from oldest name
* update min cryptography version
* remove run_oldest_tests.sh
* upgrade setuptools and pyopenssl
* update cffi, pyparsing, and idna
* expand oldest_constraints comments
* clarify oldest comment
* update min configobj version
* update min parsedatetime version
* quote tox env name
* use Python 3.6 in the oldest tests
* use Python 3.6 for oldest integration tests
* properly pin asn1crypto
* update min six version
* set basepython for a nicer error message
* remove outdated python 2 oldest constraints
* Minor fix to logging message
the `if socket_kwargs` check will always evaluate to `True`.
* Update acme/acme/crypto_util.py
Co-authored-by: alexzorin <alex@zor.io>
* --preferred-chain: only match root name
Currently, when certbot is given the `--preferred-chain='Some Name'`
flag, it iterates through all alternate chains offered by the ACME
server until it finds any certificate which has `'Some Name'` as its
Issuer Common Name. Unfortunately, this means that if the desired
alternate chain is a strict subset of any earlier chain (e.g. the
default chain is 'EE <-- Int <-- Root1 <-- Root2', but the desired
chain is 'EE <-- Int <-- Root1'), there is no name which can be
provided by the user which will allow the client to select the desired
chain.
This change makes it so that the `find_chain_with_issuer` logic only
cares about the Issuer Common Name found in the last certificate in
each chain. In the example above, the user would then be able to get
their desired chain by specifying `--preferred-chain='Root1'`: although
that name appears in the default chain, it does not appear in the
highest certificate of that chain.
This change is technically backwards-incompatible. However, the only
advice that has been given to users of certbot (and the only usecase
that we believe has existed so far) involved setting the flag to a
value that is the name of a root, not an intermediate, so we don't
expect any real-world configurations or use-cases to be broken.
Fixes #8577
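A sketch of the new matching rule, written directly against `cryptography` (the real `find_chain_with_issuer` differs in details such as logging and error handling):

```
import re

from cryptography import x509
from cryptography.hazmat.backends import default_backend
from cryptography.x509.oid import NameOID

def chain_matches_issuer(fullchain_pem, issuer_cn):
    """Return True if the topmost certificate of the PEM chain was issued by issuer_cn."""
    pem_blocks = re.findall(
        r'-----BEGIN CERTIFICATE-----.+?-----END CERTIFICATE-----',
        fullchain_pem, re.DOTALL)
    top = x509.load_pem_x509_certificate(pem_blocks[-1].encode(), default_backend())
    # Only the issuer CN of the *last* certificate in the chain is considered,
    # so a name that also appears lower in a longer default chain no longer
    # shadows the shorter chain the user asked for.
    cns = top.issuer.get_attributes_for_oid(NameOID.COMMON_NAME)
    return bool(cns) and cns[0].value == issuer_cn
```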
* Update interfaces.py
* test: certbot-ci crash due to no p521 on boulder
The bugfix in #8598 added an integration test to request a certificate
for an EC P-521 key, which is unsupported when ACME_SERVER=boulder,
failing our nightly integration tests.
* add an integration test for all EC curves
Using `tools/offline-sigrequest.sh` is annoying. A while ago I looked into how we could use our yubikeys for our Windows code signing signatures and in the process of doing that learned how to use them for the certbot-auto signature. The certbot-auto signature won't be needed once https://github.com/certbot/certbot/issues/8526 is resolved and we've implemented that plan which will hopefully be in 2-3 months, but despite that, doing this still felt worth it to me.
The script still defaults to using `tools/offline-sign.sh`, but you can set an environment variable to use the yubikey instead. I tested both branches here and it worked.
* Fix EC curve name typo in crypto_util
Fix typo of secp521r1 in crypto util module.
- secp521r1 is to be supported by certbot, but a typo of "SECP521R1" in the input validation section of the make_key function results in an error being thrown
* Add myself to authors.md
Add myself to authors.md ^^
* Add test for secp521r1 key generation
Add test for secp521r1 key generation to cli-tests
For some time, SUSE distributions have had both an apachectl
executable and an apache2ctl compat symlink so both could be used
but apachectl is preferred since that's the official upstream name.
This is currently the case in SLE 15 SP2 and openSUSE Leap 15.2
(and every release since SLE 12 SP1)
OTOH, openSUSE Tumbleweed removed the apache2ctl compat symlink
some weeks ago and both SLE/Leap will follow in one of the next
releases so it's better to change certbot to use the official name,
apachectl.
* clean up some Sphinx warnings
* first attempt at a doc-test pipeline job
* fix formatting
* fix test name
* set env for bash
* try bash vs script
* maybe it didn't like me setting 'PATH'...derp
* drop use of venv
* sphinx-build isn't a py script
* try activating venv
* docs: remove unused html_static tags
* clean up final sphinx build errors for certbot
* clean up final sphinx build errors for acme
* better names for docs pipeline
* fix spelling
* add docs_extras to setup.py
* remove temp doc-testing pipeline; add template to main.yml
* rearrange pipeline execution; run sphinx builds in one job
* add documentation note to compat.os
* add uninstall.rst as a sub-toctree to avoid build error
Now that we have a new pipstrap script with a recent version of pip, dependencies for Windows can be resolved correctly on Python 3.8.
This PR enables tests on Python 3.8 and packages Certbot for Windows on Python 3.8 as well. I am not moving up to Python 3.9 since some dependencies (`cryptography`, `pynacl`) do not yet provide wheels for Python 3.9 on Windows, which would require a complete C++ build system to compile them.
* Enable windows tests on Python 3.8 and package it on Python 3.8 also.
* Upgrade pynsist, nsis and pywin32, remove old workarounds
Co-authored-by: Brad Warren <bmw@users.noreply.github.com>
* Fix plugin param type in updater
The command used to do this was:
sed -i 's/\(:type .*plugins:\) `list` of `str`/\1 certbot._internal.plugins.disco.PluginsRegistry/g' certbot/certbot/_internal/updater.py
* fix plugin param type in main.py
The command used to do this was:
sed -i 's/\(:type .*plugins:\) `list` of `str`/\1 plugins_disco.PluginsRegistry/g' certbot/certbot/_internal/main.py
The method `os.readlink()` has a significant behavior change on Python 3.8+ on Windows.
Starting with this version, it unconditionally returns the resolved path in its "extended-style" form, a form that allows more than 259 characters in a Windows path and whose string representation is prefixed with "\\\\?\\".
See https://docs.microsoft.com/fr-fr/windows/win32/fileio/naming-a-file?redirectedfrom=MSDN#maximum-path-length-limitation
The problem is that `os.readlink()` does this for any path, including paths that could be represented in the normal form. As a consequence, any string comparison with a path given in the normal form will fail, even if both represent the same path. This makes Certbot partially break on Windows with Python 3.8.
My proposal in this PR is to forbid `os.readlink()` and provide `certbot.compat.filesystem.readlink()`, which serves the same purpose of resolving the target path of a link, but behaves consistently across supported Python versions.
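A rough sketch of the idea behind `certbot.compat.filesystem.readlink()` (a simplification; the real implementation is more careful):

```
import os

def readlink(link_path):
    """Resolve a symlink like os.readlink(), but return a normal-form path on Windows."""
    path = os.readlink(link_path)
    if os.name == 'nt' and path.startswith('\\\\?\\'):
        # Python 3.8+ on Windows prepends the extended-path prefix unconditionally;
        # strip it so string comparisons against normal-form paths keep working.
        return path[4:]
    return path
```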
* Forbid os.readlink()
* Use readlink
* Raise error with long paths on Windows
* Add unit tests
* Update certbot/certbot/compat/filesystem.py
Co-authored-by: Brad Warren <bmw@users.noreply.github.com>
[As discussed in Mattermost](https://opensource.eff.org/eff-open-source/pl/yhtp4qu4zpfczm5wxmzxhndrto), our Apache test farm tests are failing because the CA certificate in the old version of boulder we have pinned expired over the weekend. This PR fixes that by running a local Pebble instance instead of an external boulder instance.
* switch from external boulder to local pebble
* add --http-01-port to run_acme_server
* Edit certs -> certificates in user-facing text.
To reduce confusion, we should consistently use the full term.
* Edit certs->certificates in more user-facing text.
* fix failing lint (line too long)
* fix typo
Co-authored-by: Jacob Hoffman-Andrews <github@hoffman-andrews.com>
Co-authored-by: Alex Zorin <alex@zorin.id.au>
* Fix TTL mismatch leading to HTTP 412
This PR is a follow-up to #8521, where we address the
issue of potentially having a TTL mismatch when executing
a DNS change (transaction = deletions + additions). Let's say
we have a record `foo.org 30 IN TXT foo-content` with a TTL of 30s;
when creating a challenge or cleaning up we might need to perform
a deletion operation in the transaction. Currently certbot
would ask the Google API to delete the foo record like this:
`foo.org 60 IN TXT foo-content`, ignoring the record's original
TTL and using 60s instead. This leads to HTTP 412 because Google
expects a perfect match between what we want to delete and what is
in the DNS. See also #8523. (A sketch of the fix appears after this
commit list.)
* remove ttl from default data to avoid confusions
* Refactor tests and add a missing case
This commit adds a test that covers the case when we are
deleting a TXT record which contains a single rrdatas. Also,
refactoring a couple of tests.
* Make get_existing_txt_rrset documentation more precise about return value
* Add missing assertions in tests.
* fix linting issues
* Mention fix on changelog
* Explain fix around user impact
* Explain what happens when no records are returned
* Update certbot/CHANGELOG.md
* Update certbot/CHANGELOG.md
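A hedged sketch of the TTL handling described in the "Fix TTL mismatch leading to HTTP 412" commit above; the record dictionary mirrors the shape used by the Cloud DNS API, but the helper itself is illustrative:

```
def build_txt_deletion(existing_rrset, default_ttl=60):
    """Build the 'deletions' entry of a DNS change, reusing the live record's TTL.

    Google requires the deletion to match the existing record exactly, so using a
    hard-coded TTL (e.g. 60) when the record was created with 30 triggers HTTP 412.
    """
    return {
        'kind': 'dns#resourceRecordSet',
        'name': existing_rrset['name'],
        'type': 'TXT',
        'ttl': existing_rrset.get('ttl', default_ttl),  # match what is actually in the zone
        'rrdatas': existing_rrset['rrdatas'],
    }
```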
* Added note to each DNS documentation index page to mention that plugins need to be installed and are not included as standard.
* Resolved issue with white space in doc files
* Changed wording as discussed in PR.
* Changing URL to new wildcard instructions link
* Update certbot-dns-cloudflare/certbot_dns_cloudflare/__init__.py
* update_account: print correct message for -m ""
When -m "" was passed on the CLI, Certbot would print that it updated
the email to '' (an empty string) rather than printing that it removed
the contact details.
This commit also refactors the update_account tests to be a bit more
modern.
* use addCleanup instead of tearDown in tests
* Fix fetch of existing records from Google DNS
There have been many complaints about the `certbot_dns_google` plugin
failing with:
* HTTP 412 - Precondition not met
* HTTP 409 - Conflict
See #6036. This PR fixes that situation. The bug lies in how we
fetch the TXT records from Google. For a large number of records
the Google API paginates the result, but we ignore the subsequent
pages and assume that if the record is not in the first response then
it doesn't exist. This leads to either HTTP 409, HTTP 412, or both.
In this PR we leverage the use of filters on the API to get exactly
the records we are looking for. Apart from fixing the problem stated
above, this has the extra benefit of making the process faster by
reducing the number of API calls, and it doesn't require us to handle
any pagination logic. (A sketch of the filtered lookup appears after
this commit list.)
* Explain changes on CHANGELOG
* Edit AUTHORS.md
* make execute static
* Update certbot/CHANGELOG.md
Being more specific about which plugin this bug fix is meant for.
Co-authored-by: alexzorin <alex@zor.io>
* Fix if expression to be more python-idiomatic
Co-authored-by: alexzorin <alex@zor.io>
* Sort AUTHORS.md
* Simplify tests
Make rrs_mock modeling simpler and refactor
* Revert "Simplify tests"
This reverts commit 9de9623ba7466bf76a7d9075d4eba6980cbe0b62.
* Reimplement conditional mock
We still want to use a conditional mock, but make it simpler
to understand by using MagicMock.
* Revert "Sort AUTHORS.md"
This reverts commit b3aa35bcf16f393b2e08ca22278d4c0cfe6c7282.
* Add name in AUTHORS.md
Co-authored-by: alexzorin <alex@zor.io>
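A sketch of the filtered lookup referred to above; the `name` and `type` parameters belong to the Cloud DNS `resourceRecordSets.list` API, while the helper around them is illustrative:

```
def get_existing_txt_rrset(dns_service, project_id, zone_id, record_name):
    """Fetch exactly the TXT rrset we care about instead of paging through the whole zone.

    dns_service is assumed to be a googleapiclient client built with
    discovery.build('dns', 'v1', ...).
    """
    request = dns_service.resourceRecordSets().list(
        project=project_id,
        managedZone=zone_id,
        name=record_name,  # server-side filter on the record name...
        type='TXT',        # ...and on the record type, so no pagination is needed
    )
    response = request.execute()
    rrsets = response.get('rrsets', [])
    return rrsets[0] if rrsets else None
```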
In 96a05d9, mypy testing was added to certbot-ci, but introduced an
undeclared dependency on acme.magic_typing, resulting in a crash when
run under the integration-external tox environment.
This change uses the typing module in certbot-ci in place of
acme.magic_typing. It is already provided via dev_constraints.
Fixes https://github.com/certbot/certbot/issues/8519.
I left the `certbot-auto` docs in `install.rst` to avoid breaking links and to help propagate information about our changes there. I moved it closer to the bottom of the doc though since I think our documentation about OS packages and Docker is more helpful to most people.
* clean up certbot-auto docs
* add more info to changelog
* remove more certbot-auto references
Fixes #8256
First, let's sum up the problem to solve. We disabled the build isolation available in pip>=19 because it could potentially break the Certbot build without any control on our side; basically, builds were not reproducible. Indeed, the build isolation triggers builds of PEP 517-enabled transitive dependencies (like `cryptography`) with the build dependencies defined in their `pyproject.toml`. For `cryptography` in particular these requirements include `setuptools>=40.6.0`, and quite logically pip will install the latest version of `setuptools` for the build. So when `setuptools` broke with version 50, our build did the same.
But disabling the build isolation is not a long-term solution, as more and more projects will migrate to this approach, and it provides a lot of benefits in how dependencies are built.
The ideal solution would be to apply version constraints on our side to the build dependencies, in order to pin `setuptools` for instance and decide precisely when we upgrade to a newer version. However, for now pip does not provide a mechanism for that (like a `--build-constraint` flag or propagation of the existing `--constraint` flag).
That is, until I saw https://github.com/pypa/pip/issues/9081 and https://github.com/pypa/pip/issues/8439.
Apart from the fact that https://github.com/pypa/pip/issues/9081 shows that the pip maintainers are working on this issue, it explains how pip works regarding PEP 517 and suggests a workaround that can be used to pin the build dependencies anyway. It turns out that pip invokes itself in each build isolation environment to install the build dependencies. This means that even if some flags (like `--constraint`) are not explicitly passed to the pip sub-call, the global environment remains, in particular the environment variables.
Thus, since every pip flag can alternatively be set by an environment variable following the pattern `PIP_[FLAG_NAME_UPPERCASE]` (so `PIP_CONSTRAINT` for `--constraint`), you can pass a constraint file to the pip sub-call through that mechanism.
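For example, from a Python build helper the mechanism could be used roughly like this (the constraints path is a placeholder):

```
import os
import subprocess
import sys

def pip_install_with_build_constraints(packages, constraints_file):
    """Run pip with PIP_CONSTRAINT exported, so the isolated PEP 517 build
    environments that pip creates for transitive dependencies honor our pins too."""
    env = os.environ.copy()
    env['PIP_CONSTRAINT'] = constraints_file  # equivalent to passing --constraint to the sub-call
    subprocess.run(
        [sys.executable, '-m', 'pip', 'install', *packages],
        env=env, check=True)

# e.g. pip_install_with_build_constraints(['cryptography'], '/path/to/pipstrap_constraints.txt')
```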
I made some tests with a constraint file containing a pin for `setuptools`: indeed, inside the isolated build environment, the constraint file was honored and the pinned version was used to build the dependencies (I tested it with `cryptography`).
Finally, this PR takes advantage of this mechanism by setting `PIP_CONSTRAINT` in `pip_install`, the snap build process, the Dockerfiles, and the Windows installer build process.
I also extracted the requirements of the new `pipstrap.py` so they can be reused in these various build processes.
* Use workaround to fix build requirements in build isolation, and re-enable build isolation
* Clean imports in pipstrap
* Externalize pipstrap reqs to be reusable
* Inject pipstrap constraints during pip_install
* Update docker build
* Update snapcraft build
* Prepare installer build
* Fix pipstrap constraints in snap build
* Add back --no-build-cache option in Docker images build
* Update snap/snapcraft.yaml
* Use proper flags with pip
Co-authored-by: Brad Warren <bmw@users.noreply.github.com>
This PR adds a `--timeout` flag to `tools/snap/build_remote.py` in order to fail the process if the execution time reaches the provided timeout. It is set to 5h30 on the relevant Azure jobs, while the jobs themselves have a 6h timeout managed on the Azure side. This allows slightly better output for these jobs when the snapcraft build stalls for any reason.
This PR proposes an alternative configuration for the snap build that avoids the need to use `--system-site-packages` when constructing the virtual environment in the snap.
The rationale for `--system-site-packages` was that, by default, snapcraft creates a virtual environment without `wheel` installed in it. However, we need it to build wheels like `cryptography` on ARM architectures. Sadly, there is no way to instruct snapcraft to install some build dependencies in the virtual environment before it kicks off the build phase itself, without overriding that entire phase (which is possible with `parts.override-build`).
The alternative proposed here is not to override the entire build part, but just to add some preparatory steps that will be performed before the main actions handled by the `python` snapcraft plugin. To do so, I take advantage of the `--upgrade` flag available for the `venv` module in Python 3. This allows reusing a preexisting virtual environment and upgrading its components. Adding a flag to the `venv` call is possible in snapcraft thanks to the `SNAPCRAFT_PYTHON_VENV_ARGS` environment variable (which is already used to set `--system-site-packages`).
With `SNAPCRAFT_PYTHON_VENV_ARGS` set to `--upgrade`, we configure the build phase as follows:
* create the virtual environment ourselves in the expected place (`SNAPCRAFT_PART_INSTALL`)
* leverage `tools/pipstrap.py` to install `setuptools`, `pip`, and of course, `wheel`
* let the standard build operations kick in with a call to `snapcraftctl build`: at that point the `--upgrade` flag will be appended to the standard virtual environment creation, reusing our crafted venv instead of creating a new one.
This approach also has the advantage of invoking `pipstrap.py` the same way it is done for the other deployable artifacts and for the PR validations, reducing the risk of drift between the various deployment methods.
Although Certbot is a classic snap, it shouldn't load Python code from
the host system. This change prevents packages being loaded from the
"user site-packages directory" (PEP-370). i.e. Certbot will no longer
load DNS plugins installed via `pip install --user certbot-dns-*`.
This adds an 'Error parsing credentials file ...' wrapper to any errors
raised inside certbot-dns-google's usage of oauth2client, to make it
obvious to the user where the problem lies.
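A sketch of the wrapping described above (the oauth2client call and the exception handling are simplified):

```
from certbot import errors
from oauth2client.service_account import ServiceAccountCredentials

def load_credentials(account_json_path):
    """Wrap oauth2client failures so the user sees which file is at fault."""
    try:
        return ServiceAccountCredentials.from_json_keyfile_name(
            account_json_path,
            scopes=['https://www.googleapis.com/auth/ndev.clouddns.readwrite'])
    except Exception as exc:  # oauth2client can raise several error types here
        raise errors.PluginError(
            'Error parsing credentials file {}: {}'.format(account_json_path, exc))
```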
* cli: clean up `certbot renew` summary
- Unduplicate output which was being sent to both stdout and stderr
- Don't use IDisplay.notification to buffer output
- Remove big "DRY RUN" guards above and below, instead change language
to "renewal" or "simulated renewal"
- Reword "Attempting to renew cert ... produced an unexpected error"
to be more concise.
* add newline to docstring
Co-authored-by: ohemorange <ebportnoy@gmail.com>
Co-authored-by: ohemorange <ebportnoy@gmail.com>
Fixes https://github.com/certbot/certbot/issues/8495.
To further explain the problem here, `modify_kwargs_for_default_detection` as called in `add` is simplistic and doesn't always work. See https://github.com/certbot/certbot/issues/6164 for one other example.
In this case, we're bitten by the code at d1e7404358/certbot/certbot/_internal/cli/helpful.py (L393-L395)
The action used for deprecated arguments isn't in `ZERO_ARG_ACTIONS`, so it assumes that all deprecated flags take one parameter.
Rather than trying to fix this function (which I think can only realistically be fixed by https://github.com/certbot/certbot/issues/4493), I took the approach that was previously used in `HelpfulArgumentParser.add_deprecated_argument` of bypassing this extra logic entirely. I adapted that function to now call `HelpfulArgumentParser.add` as well for consistency and to make testing easier.
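A hedged sketch of the approach: deprecated flags are registered with an `argparse` action that consumes the flag's historical arity and only warns, so the zero-arg detection in `modify_kwargs_for_default_detection` never comes into play (the names here are illustrative, not the exact Certbot code):

```
import argparse
import logging

logger = logging.getLogger(__name__)

class _DeprecatedAction(argparse.Action):
    """Swallow a deprecated flag and emit a warning instead of storing a value."""
    def __call__(self, parser, namespace, values, option_string=None):
        logger.warning('Use of %s is deprecated and has no effect.', option_string)

def add_deprecated_argument(parser, argument_name, num_args):
    # nargs matches the flag's historical arity so existing command lines still parse.
    parser.add_argument(argument_name, action=_DeprecatedAction,
                        nargs=num_args, help=argparse.SUPPRESS)
```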
* Rename deprecated arg action class
* Skip extra parsing for deprecated arguments
* Add back test of --manual-public-ip-logging-ok
* Add changelog entry
(cherry picked from commit 5f73274390)
Fixes https://github.com/certbot/certbot/issues/8495.
To further explain the problem here, `modify_kwargs_for_default_detection` as called in `add` is simplistic and doesn't always work. See https://github.com/certbot/certbot/issues/6164 for one other example.
In this case, we're bitten by the code at d1e7404358/certbot/certbot/_internal/cli/helpful.py (L393-L395)
The action used for deprecated arguments isn't in `ZERO_ARG_ACTIONS`, so it assumes that all deprecated flags take one parameter.
Rather than trying to fix this function (which I think can only realistically be fixed by https://github.com/certbot/certbot/issues/4493), I took the approach that was previously used in `HelpfulArgumentParser.add_deprecated_argument` of bypassing this extra logic entirely. I adapted that function to now call `HelpfulArgumentParser.add` as well for consistency and to make testing easier.
* Rename deprecated arg action class
* Skip extra parsing for deprecated arguments
* Add back test of --manual-public-ip-logging-ok
* Add changelog entry
* Don't deprecate certbot-auto quite yet
* Remove centos6 test farm tests
* undo changes to test farm test scripts
(cherry picked from commit e5113d5815)
* nginx: fix py2 unicode sandwich
The nginx parser would crash when saving configurations containing
Unicode, because py2's `str` type does not support Unicode.
This change fixes that crash by ensuring that a string type supporting
Unicode is used in both Python 2 and Python 3.
* nginx: add unicode to the integration test config
* update CHANGELOG
Fixes https://github.com/certbot/certbot/issues/8134.
* Test on Python 3.9.
* Mention Python 3.9 support in changelog.
* s/\( *'Pro.*3\.\)8\(',\)/\18\2\n\19\2/
* undo changes to tox.ini
* Move more tests to Python 3.9
* Update PyYAML and packages which pinned it back
* Upgrade typed-ast
* Use <= to "pin" dnspython
* Fix lint by telling pylint it cannot be trusted
* Disable mypy on RFC plugin
* add comment about <= support
* tests: add certbot-dns-rfc2136 integration tests
* dont use 'with' form of socket.socket
fixes py2 crash
* address some feedback:
- conftest: make DNS server a global resource
- conftest: add dns_xdist parameter into node config
- conftest: add --dns-server=bind flag
- conftest: if configured, point the ACME server to the DNS server
- dnsserver: make it sort-of compatible with xdist (future-proofing)
- context: parameterize dns-rfc2136 credentials file (future proofing)
- context: reduce dns-rfc2136 propagation time to speed up tests
- tox: add a integration-dns-rfc2136 target
- rfc2136: add a test/zone for subdelegation
- rfc2136: skip tests if no DNS server is configured
* try add integration-dns-rfc2136 to CI
* mock recursive dns via RPZ
* update --dns-server args and tox.ini args
* address more feedback:
- dns_server: rename rfc2136 creds file to .tpl
- dns_server: dont vary dns server port, instead we will vary zone names (#8455)
- dns_server: log error if bind9 fails to stop cleanly
- dns_server: replace assert with raise
- context: remove redundant _worker_id
- context: remove redundant cleanup override
- context: fix seek/flush in credentials context manager
- context: rename skip_if_no_server -> ...bind_server
- context: add newline EOF
* conftest: document _setup_primary_node sideeffects
* ci: rfc2136-integration from standard->nightly
* fix _stop_bind (function was renamed to stop)
* ignore errors from shutil.rmtree during cleanup
* dns_server: check for crash while polling
* remove --dry-run from rfc2136 test
* cli: improve Obtaining/Renewing wording
* dont use logger, and use new phrasing
* .display_util.notify: dont wrap
As this function is supposed to be an analogue for print, we do not want
it to wrap by default.
Fixes #7717
This PR adds a `--dns-server` option to the `run_acme_server` test tool, in order to provide an arbitrary DNS server to Pebble or Boulder for the integration tests.
I also take this opportunity to make `run_acme_server` a real CLI tool using argparse, and to add a `--server-type` option (default `pebble`) as well.
* Set --dns-server flag in run_acme_server
* Default to pebble
* Add documentation
* Configure also Boulder
Do we have any specific reason to run the standard Linux integration tests on Python 2.7?
If not, we should move to a more recent version of Python. This PR does it for Python 3.8.
Fixes #8365
This PR adds a check when `certbot certonly` or `certbot run` is called for a certificate that already exists and would eventually be replaced. As described in #8365, this check is here to ensure that the user will not change the key type of their certificate (e.g. from ECDSA to RSA) without explicit approval (setting both `--cert-name` and `--key-type` explicitly), since RSA is the default when not specified.
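A minimal sketch of the check, assuming the lineage's current key type and the parsed CLI flags are available; the function and attribute names here are illustrative rather than the actual Certbot API:

```
from certbot import errors

def check_key_type_change(config, lineage):
    """Refuse an implicit key type change (e.g. ECDSA -> RSA) when replacing a certificate."""
    if lineage is None:
        return  # brand new certificate, nothing to compare against
    if config.key_type == lineage.private_key_type:
        return  # no key type change requested
    if not (config.set_by_user('key_type') and config.set_by_user('certname')):
        raise errors.Error(
            'Are you trying to change the key type of this certificate? Please '
            'provide both --cert-name and --key-type explicitly to confirm.')
```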
* Handle unexpected key type migration.
* Update certbot-ci/certbot_integration_tests/certbot_tests/test_main.py
Co-authored-by: Brad Warren <bmw@users.noreply.github.com>
* tests: fix leaking patch in eff_test.py
* tests: PrintTest->NotifyTest in .display.util
The function was renamed during #8432. This change renames the test as
well.
* IDisplay.notification: add `decorate` param.
The flag allows the caller to control whether the message will be
printed in a decorated way (wrapped by hlines) or in an undecorated
way (similar to print).
It is set to true by default, to reflect the existing behavior of the
function.
* IDisplay.notification: write message to debug log
In the same vein as IReporter, this ensures that all notifications which
are shown to the user also make an appearance in the debug log, which
will aid in troubleshooting.
* restore accidentally deleted newline in decoration
* add helper function for printing status messages
* register: use notify rather than logger
Undoes the change in #8393 in favor of the new helper
* comment .display and ._internal.log
Describing when it is suitable to use each
* add more comments to log.py
* make IDisplay.notification decorate arg private
* rename notify->print and move to .display.util
* rename .display.print back to .display.notify
because linters complain about print being a redefined builtin
* Add a new, simplified version of pipstrap.
* Use tools/pipstrap.py
* Uncomment code
* Refactor pip_install.py and provide hashes.
* Fix test_sdists.sh.
* Make code work on Python 2.
* Call strip_hashes.py using Python 3.
* Pin the oldest version of httplib2 used in distros
* Strip enum34 dependency.
* Remove pip pinnings from dev_constraints.txt
* Correct pipstrap docstring.
* Don't set working_dir twice.
* Add comments
* Remove python_version from mypy.ini.
* Fix magic_typing
* Ignore msvcrt usage.
* make mypy happier
* clean up changes
* Add type for reporter queue
* More mypy fixes
* Fix pyrfc3339 str.
* Remove unused import.
* Make certbot.util mypy work in both Pythons
* Fix typo
While reviewing https://github.com/certbot/certbot/pull/8404, it occurred to me that we're keeping both the generated files and the script used to generate them in `git`. Keeping both around seems unnecessary and is almost asking for the files to get out of sync at some point in the future. I fixed that by removing the files, adding them to `.gitignore`, and updating `build_remote.py` to generate them as needed.
* Remove generated files.
* Add generated files to gitignore.
* Reuse generate_dnsplugins_all.sh in build_remote
While working on https://github.com/certbot/certbot/issues/8400, I noticed our Fedora AMIs are quite out of date. I considered updating them and what we could do to avoid the AMIs becoming so out-of-date in the future, but I think we don't actually need these tests.
I pulled a new count of Certbot users by OS and we have less than 7,000 Fedora users meaning only ~0.26% of Certbot users run Fedora. (I think Fedora is a popular desktop OS, but not as popular of a server OS which is where Certbot normally runs.)
Also, Certbot is regularly updated on Fedora, including Fedora Rawhide, the rolling release version of Fedora which is similar to Debian sid/unstable. Rawhide changes far too frequently for it to make sense for us to run tests there, in my opinion, but that also means that many problems, such as Certbot's unit tests failing to run because of Fedora changes, will be caught there by our Fedora maintainers before we'd even see them. This is how https://github.com/certbot/certbot/issues/7106 became an issue and how I learned [Certbot worked on Python 3.9 before we could run tests on it](https://github.com/certbot/certbot/issues/8134#issuecomment-655106169).
Because of all this, I think we should just simplify things and remove these tests. If a problem arises in the future, we can always add them back.
Fixes #8409.
Change the line in the README to allow `sudo /snap/bin/lxd.migrate -yes` to fail (for example, if there's nothing to migrate), but the whole command to succeed.
I tested this on a clean Focal install and confirmed it works.
Fixes https://github.com/certbot/certbot/issues/8400.
I had to switch the package installed in `apacheconftest` to `libapache2-mod-wsgi-py3` because Ubuntu 20.10 removed the Python 2 version of this module.
I didn't add this AMI to `tests/letstest/auto_targets.yaml` because like Ubuntu 20.04, `certbot-auto` has never worked on the OS.
* Add Ubuntu 20.10 test farm tests
* Try Python 3 WSGI
This PR adds the following documentation improvements to fix https://github.com/certbot/certbot/issues/7958:
- Simplify building external plugins
- Separate out certbot snap instructions from plugin instructions
- Mention that dnsimple is just an example for the plugin instructions
- Mention remote build for other architectures
- Mention snap doc exists elsewhere in developer guide (`contributing.rst`)
* Set up generate_dnsplugins_all.sh for all files and parametrize snapcraft and postrefreshhook files
* Create constraints file in the generate_dnsplugins_all script
* Separate out plugin and certbot snaps and update instructions
* Add remote build instructions
* Add pointers to the README to contributing.rst
Fixes #8355
During the troubleshooting of #8355, I came to the conclusion that using BuildKit was causing the problem; without it, all Docker images are built correctly. BuildKit was initially enabled to avoid a build problem in Azure Pipelines, but I also found in my recent tests that this problem is not there anymore.
You can find more details about the troubleshooting and reasoning in #8355.
As a consequence, I disabled the usage of BuildKit in this PR, which solves the issue.
Fixes #8202
This PR adds an Azure Pipelines job to execute `certbot plugins --prepare` for each Docker image created during the CI on amd64.
* Prepare basic integration tests for certbot dockers
* Add a displayName for the integration tests task
* Add timeout to DNS query function calls
* Modify tests to account for new timeout variable
* Add change to CHANGELOG
* Add `dns.exception.Timeout` to exception handler
* Move changelog to 1.10.0
Fixes https://github.com/certbot/certbot/issues/8171.
See the comment at the top of the script to learn how to set things up and run this. Running the script between releases will have no effect on our snaps and it should fail when creating the GitHub release. The latter is described at https://github.com/certbot/certbot/pull/8189#discussion_r466707114.
* Rename create_github_release to finish_release
* Add initial version of snap release automation.
* Handle snapcraft login.
* Catch OSError raised when snapcraft doesn't exist.
* Update documentation.
* Only publish the Certbot snap for now.
* Fix typo.
* Document other exceptions.
* Document assertion
* Add status message before getting revisions.
* Publish all snaps.
With more and more of our wildcard instructions on https://certbot.eff.org telling people to use these plugins, I think we should get ready to move our DNS plugins to the stable channel. This PR removes `grade: devel` so the snap store doesn't prevent us from doing that when we want to. See #8128 where we did this to the Certbot snap for more info.
You can see the snap tests passing with this change at https://dev.azure.com/certbot/certbot/_build/results?buildId=2797&view=results.
Fixes https://github.com/certbot/certbot/issues/8292.
This uses the same approach that worked well for us in https://github.com/certbot/certbot/pull/7926. I'm sure we could delete more code or refactor things here, but I think we should make the most conservative changes we can to certbot-auto until we can just delete the entire thing.
I ran the full test suite on these changes at https://dev.azure.com/certbot/certbot/_build/results?buildId=2773&view=results and manually tested things on OpenSUSE and it worked as expected. certbot-auto refused to create new installations and refused to update old ones while continuing to allow the old version of Certbot to run.
* Deprecate cb-auto outside of Debian and RHEL.
* Don't deprecate Amazon Linux yet.
Partial fix for #8280
This PR refactors the bash script wrapper for snap (`/certbot.wrapper`) into the certbot Python codebase. Here are the key points of this refactoring (a minimal sketch follows the list):
* the wrapping is applied when the `main` function from `certbot._internal.main` is called, if the environment variable `CERTBOT_SNAPPED` is set to `True` (which is done during the snap build)
* the initial bash script wrapper is removed, simplifying `snap/snapcraft.yaml` by removing the `certbot.wrapper` part
* the dependencies on the `curl` and `jq` binaries are removed
* a failure while requesting the snapd socket is correctly handled and displays an informative message explaining how to correct the situation, as required by #8280
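Roughly, that wrapping logic could look like this sketch, which uses only the standard library to talk to the snapd socket; the helper names (`_SnapdConnection`, `prepare_env_if_snapped`) are assumptions, not the actual code:
```
import http.client
import os
import socket


class _SnapdConnection(http.client.HTTPConnection):
    """HTTP connection over the snapd Unix socket (hypothetical helper)."""
    def __init__(self):
        super().__init__("localhost")

    def connect(self):
        self.sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
        self.sock.connect("/run/snapd.socket")


def prepare_env_if_snapped():
    # Only apply the snap-specific wrapping when running inside the snap.
    if os.environ.get("CERTBOT_SNAPPED") != "True":
        return
    try:
        conn = _SnapdConnection()
        conn.request("GET", "/v2/connections?snap=certbot")
        data = conn.getresponse().read()
    except OSError:
        raise SystemExit(
            "Unable to communicate with snapd. Please make sure the core "
            "snap is installed and up to date, then try again.")
    # ... parse `data` to find connected plugin snaps and extend the plugin path ...
```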
One side note about the modifications done to `app.certbot.command` in `snapcraft.yaml`. Normally calling `bin/certbot` should be sufficient, and it effectively is under a normal situation (`core` snap up to date). However, in the same situation as when the problem occurs in #8280, using `bin/certbot` makes the snap raise an exception about the `certbot.main` module not being found.
It seems that when the `core` snap is not up to date (on Debian for instance with the default `snapd` installation), the shebang `/usr/bin/env python3` in the `bin/certbot` wrapper is wrongly resolved to the host Python instead of the snap Python. It works as expected if the `core` snap is up to date. One way to fix that is to keep a bash script wrapper, because in that case it is the `PATH` value that matters to resolve the Python interpreter, and `PATH` is correctly set up to resolve it from the snap first.
However, to keep the simplification provided by the wrapper removal, I preferred to use `bin/python3 $SNAP/bin/certbot` as the `command` to explicitly target the correct Python interpreter. Again, normally this is not needed because everything works correctly with an up-to-date `core` snap, but since the root purpose of all of this is to handle bad situations, it is better to have a snap that is actually able to start and display the informative message.
* Refactor the bash wrapper for snap execution as Python code into certbot
* Remove wrapper, finalize the python logic
* Organize code
* Improve error handling
* Update command
* Setup basic certbot logging before running the snap prepare logic
* Improve instructions
* Use logging facility
* Handle properly an exception in snap_config
* Use the python script call approach
* Update instructions to keep sync with https://github.com/certbot/website/pull/650
This reverts commit feca125437.
Since this change landed, ARM builds for many of the DNS plugins have failed every night. See https://dev.azure.com/certbot/certbot/_build?definitionId=5 or our public Mattermost channel.
I quickly tried to fix this myself and wasn't trivially able to do so. I tried setting `SNAPCRAFT_PYTHON_VENV_ARGS: --system-site-packages` and adding `python3-wheel` as a build dependency, but it didn't work for some reason. The `python3-wheel` package didn't seem to be installed.
I still suspect something like this is the approach we should take; however, I want to fix the failing tests now so things are no longer broken in `master` and those of us on the Certbot team at EFF stop getting spammed with 54 (!!) emails about failed builds from Launchpad every night.
Unfortunately, while I was working on this the queue for ARM machines on Launchpad jumped up to an estimated ~20 hour wait, but I confirmed that this fixes the problem by building on an ARM AMI using the instructions at https://github.com/certbot/certbot/blob/master/tools/snap/README.md#use-testing-and-development. If whoever reviews this would like an ARM machine to test on themselves, please let me know.
* add set -e to all bash instances in deploy-stage.yml
* retry uploading snap if we fail
* Add the rest of the set -e calls for bash in azure while we're here
* use retry based on travis_retry
* add set -e to the script: sections that run on macOS/Linux
* actually don't fail on result
* reset result before running command because bash short-circuits `or` conditionals
* remove inapplicable comment
Partial fix for #8256
This PR makes tox call pipstrap before any commands are executed, and makes Azure Pipelines call pipstrap when appropriate (i.e. when an actual call to pip is made).
* Invoke pipstrap in tox and during the CI
* Set default value for PYTHON_VERSION and always set python interpreter
* Set Python for snaps_build also
* Fix the build for Windows installer
* Add a warning comment for pinned versions in pipstrap
* Rebuild letsencrypt-auto
* Same version as the installer build
* Let's update to latest pip for installer tests
The ErrorHandler context manager could produce very verbose CLI output
when handling long exception chains (PEP 3134 enhanced reporting).
Rather than logging every exception with its traceback to the CLI, this
commit changes ErrorHandler so that only the final exception in the
chain, without traceback, is logged to the CLI.
This is consistent with a previous change made in the global except
hook (#8000).
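A rough sketch of the behavior described above (not the actual ErrorHandler code): the full chain still goes to the debug log, while the CLI only sees the final exception's message:
```
import logging

logger = logging.getLogger(__name__)


def report_error(exc):
    # The full exception chain, with tracebacks, still goes to the debug
    # log file for troubleshooting.
    logger.debug("Encountered exception:", exc_info=exc)
    # The CLI only sees the final exception's message, without a traceback.
    logger.error(str(exc))
```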
* Fixed a few linting warnings for `if not x in y`.
These should have been caught by pylint, but weren't.
* Replaced "x in y.keys()" with "x in y".
It's much faster, and more Pythonic.
* nginx: reduced CLI logging when reloading nginx
Hides the output of `nginx -s reload` from the CLI, moving it to
debug-level logging.
Additionally, fixes an issue where Certbot did not properly capture the
output of the nginx reload and restart commands.
Fixes #8231
* remove leftover debugging
* reorder CHANGELOG
* don't use bare asserts
random25863.example.org appears in multiple port 80 virtualhosts in the
nginx testdata tarball and also is in the nginx-roundtrip-testdata.
Certbot doesn't handle these properly, which results in random test
failures.
This commit ensures that random25863.example.org only appears in a
single virtualhost and should ensure that the tests pass consistently.
Fixes #7585
This PR removes the configuration that made the test runner included in `setuptools` use pytest, removes the deprecated parameters related to setuptools testing in `setup.py`, and updates the packaging guide to use `python -m pytest` instead of `python setup.py test`.
The farm test `test_sdist.sh` is also updated to use pytest directly. This test is designed to reproduce the steps used by OS integrators when they package `certbot` and to ensure that we are not breaking something that will impact their work. We discussed this with integrators from RHEL/CentOS and Debian, and they are fine with us testing the sdist directly with pytest.
One execution of the `test_sdist.sh` farm test with the modifications made by this PR can be seen here: https://dev.azure.com/certbot/certbot/_build/results?buildId=2606&view=results
* Remove setuptools deprecated features about testing
* Updating packaging guide
* Add changelog entry
Fixes https://github.com/certbot/certbot/issues/8162.
I had to update the base of the Dockerfile to get a new enough version of Python 3. I also simplified things a lot and removed a lot of the comments that were essentially just describing how Dockerfiles work.
The most complicated changes here are in `testdata`. You can find a diff of the changes to `nginx.tar.gz` at https://gist.github.com/c7727db0cecf3f15f02439f085c73848.
The first problem was that there were some complaints from the new Apache/nginx/OpenSSL version about the 1024 bit RSA key so I updated `empty_cert.pem` both inside and outside of the tarball as well as the corresponding private key in the tarball to use a 2048 bit key.
The second problem is trickier to understand. If you look at the output from nginx after loading the config from `lots/`, you'll see it complaining about conflicting `server_name` directives for the directives I deleted. See https://dev.azure.com/certbot/certbot/_build/results?buildId=2578&view=logs&j=250aa146-b243-5f8f-bf86-17a529c9fb7e&t=9baa2014-9673-5e78-8f4f-7a463caf2bfa&l=1516.
After switching the tests to Python 3, tests on that domain started failing. What I believe to be happening is we were just lucky these tests were passing to begin with. In both the Apache and Nginx plugin, if there are conflicting virtual hosts like this, we just arbitrarily pick one. The relevant code here for nginx is 575092d603/certbot-nginx/certbot_nginx/_internal/configurator.py (L455)
I played around with a debugger and confirmed that before I removed the conflicting server names, there were two exact matches for the domain we were searching for here.
I think all that's going on is that with the switch to Python 3, the vhost we happen to choose changes and "breaks" the test. I suspect this is due to something like getting values out of a dict somewhere, where the order of items while iterating over the dict differs between Python 2 and 3. I didn't track down where this difference happens, but I personally don't think that's a good use of time since I think the real problem here is that the nginx config being tested was invalid, with conflicting `server` blocks.
I removed all references to the `server_name` causing conflicts in that nginx configuration because both server blocks had other domains that are being tested, but I could add either back if you prefer. You can see the `nginx_compat` test passing with these changes at https://dev.azure.com/certbot/certbot/_build/results?buildId=2587&view=logs&j=250aa146-b243-5f8f-bf86-17a529c9fb7e.
* update Dockerfile
* Fix apache_compat on py3.
* Update empty_cert.pem.
The command used here was `openssl req -key certbot/certbot/tests/testdata/rsa2048_key.pem -new -subj '/CN=example.com' -x509 > certbot-compatibility-test/certbot_compatibility_test/testdata/empty_cert.pem`.
* update nginx.tar.gz
* Remove conflicting server_names
This commit fixes an issue with the nginx parser where it would perform
case-sensitive matching against server_name.
This would cause the authenticator and installer to ignore existing
virtualhosts containing uppercase characters, resulting in duplicate
virtualhosts and broken configurations.
"Exact" and "wildcard" matching is now case-insensitive. Regex-based
matching will continue to respect the case mode of the pattern.
Fixes #6776.
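A minimal sketch of the matching change described above; this is a hypothetical helper, not the real parser (which also handles regex `server_name` values, keeping their own case sensitivity):
```
def matches_server_name(server_name, domain):
    """Case-insensitive exact/wildcard server_name matching (sketch)."""
    server_name = server_name.lower()
    domain = domain.lower()
    if server_name.startswith("*."):
        # "*.example.com" matches "www.example.com" but not "example.com".
        return (domain.endswith(server_name[1:])
                and domain.count(".") >= server_name.count("."))
    return server_name == domain
```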
I'm adding this comment as part of the resolution of #8251. I think rewriting the script in Python is something we really should only worry about if we're working on the script in the future. Because of this, I personally prefer a code comment rather than an issue here.
In order to avoid potentially breaking changes in the snap CLI on the
host, this commit changes certbot.wrapper to use the snap REST API (via
curl and jq) to list connected Certbot plugins.
* snap: Fix "stack smashing" error in wrapper
certbot.wrapper had implicit dependencies on sed, awk and coreutils,
which were being accidentally provided through the host system. Because
certbot.wrapper modifies LD_LIBRARY_PATH, this was causing some systems
to load an incompatible combination of shared libraries, resulting in
sed crashing.
This commit reduces the dependencies of this script to just gawk, and
explicitly stages it as part of the Certbot snap.
It additionally moves invocations of all host system programs to a
moment prior to the modification of LD_LIBRARY_PATH, and the invocation
of snapped programs to after the modification.
Fixes #8245
* snap: Don't modify LD_LIBRARY_PATH
* leftover tracing
* snap: revert curl/jq in wrapper, use gawk for now
Fixes #8252
With @bmw we dug quite a lot into why the failure happens in the ARM snap builds, and here is what we understood:
* the failure has occurred since version 50 of setuptools became available
* normally, we should not be impacted, because the setuptools version used in the snap build should be the one installed by the `core20` base snap, since the build occurs in a `venv` created with `--system-site-packages`
* BUT combined with the build isolation provided by recent versions of pip (to implement PEP 517), a bad interaction happens: following the build system definition provided by `cryptography`, pip installs the most recent version of setuptools on a separate path for the build (because `cryptography` only asks for a minimal version of `setuptools`), and then features of this version conflict with the old version of `setuptools` initially present
* the exact interaction is described here: https://github.com/pypa/pip/issues/6264#issuecomment-685230919. Basically the new version of `setuptools` triggers some hacks that are then applied at runtime to the old version of `setuptools`, which is still available in `sys.path` at this point, and breaks the build.
To fix that, one can disable build isolation for cryptography by passing `PIP_NO_BUILD_ISOLATION=no` to pip. That is the purpose of this PR.
The consequence is that the build is no longer PEP 517 compliant: if needed, the `cryptography` library will be built using the `setuptools` available on the system. In general I think this makes sense for the snap build, since we control the build environment precisely, and it makes for consistent builds that will not be broken by a new version of a build system if library maintainers did not pin a strict version of it in their build requirements. However, we now need to take care to have a compatible build system for all libraries that may declare specific build-system requirements via the PEP 517 definition in `pyproject.toml`.
As of now I think this is a safe move as long as we keep using the most recent version of `setuptools` available in Ubuntu 20.04, and that is the case here for snap builds. It may however be problematic if some libraries require a build system other than `setuptools` and do not provide a fallback to a `setuptools` build. For the record, `dns-lexicon`, which I maintain, uses `poetry` and so a PEP 517 compliant build system definition, but also provides this fallback (https://github.com/AnalogJ/lexicon/blob/master/setup.py).
Full test suite compiling the snaps for the 3 architectures using this PR is available here: https://dev.azure.com/certbot/certbot/_build/results?buildId=2596&view=results
Fixing a mistake in pull request #8212 where I recorded my changes in an already released version 😳.
- Moving new changes out of a previous changelog and into the next
release's changelog
* Allow user to remove email using update command
Fixes #3162. Slight change to control flow to replace current email
addresses with an empty list. Also add appropriate result message when
an email is removed.
* Update ACME to allow update to remove fields
- New field type "UnFalseyField" that treats all non-None fields as
non-empty
- Contact changed to new field type to allow sending of empty contact
field
- Certbot update adjusted to use tuple instead of None when empty
- Test updated to check more logic
- Unrelated type hint added to keep pycharm gods happy
* Moved some mocks into decorators
* Restore default to `contact` but do not serialize
- Add `to_partial_json` and `fields_to_partial_json` to Registration
- Store private variable noting if the value of the `contact` field was
provided by the user.
- Change message when updating without email to reflect removal of
all contact info.
- Add note in changelog that `update_account` with the
`--register-unsafely-without-email` flag will remove contact
from an account.
* Reverse logic for field handling on serialization
Now forcibly add contact when serializing, but go back to the base `jose`
field type.
* Responding to Review
- change out of date name
- update several comments
- update `from_data` function of `Registration`
- Update test to remove superfluous mock
* Responding to review
- Change comments to make from_data more clear
- Remove code worried about None (omitempty has got my back)
- Update test to be more reliable
- Add typing import with comment to avoid pylint bug
Fixes https://github.com/certbot/certbot/issues/8165.
I moved `prerequisites` up to the "Running a local copy of the client" section so that `contributing.html#prerequisites` still links to information about installing Certbot's dependencies.
I left all certbot-auto documentation that wasn't explicitly encouraging its use. I think we can rip that out once the script is deprecated.
* Create bootstrap script
* Delete a whole bunch of the bootstrap script
* modify test_tests to use new script
* put python version checking back in
* add x
* call the venv creation from inside the bootstrap
* add targets back
* modify test_apache2 to use new format
* shouldn't need virtualenv on rhel
* readd targets
* Update test_sdists to use new script
* move setting up venv back out of script so it's not run with sudo
* take venv3.py call out of bootstrap in all scripts
* add additional python3-devel pkg name
* fix test_sdists
* enable additional rhel7 repos
* clean up code and comments
* Update tests and instructions to use auto_targets.yaml with test_leauto_upgrades.sh and test_letsencrypt_auto_certonly_standalone.sh
* only install python3-devel.x86_64 for rhel7
* Upgrade python version for debian in test_apache2.sh
* don't run test_tests or test_sdists on debian 9 or ubuntu 16.04
* Add 20.04 and 20.04 arm images to targets.yaml
* use pyenv to upgrade to python3.5
* remove arm64 instance because it's having auth trouble
* correct pyenv usage on ubuntu
* add arm64 target to targets.yaml
* replace debian 9 arm64 with ubuntu 20
* don't try to upgrade a perfectly good python version
* let's just add ubuntu20 to apache2_targets while we're here
* uncomment test_apache2
* move adding python3-devel.x86_64 to bootstrap_os_packages to avoid potential race condition
* no need to specify the arch once extra rhel7 repos enabled
* explicitly specify python3
* don't fail if we can't enable rhel7 extras
* capture python36-devel as well
Fixes https://github.com/certbot/certbot/issues/8022, https://github.com/certbot-docker/certbot-docker/issues/25, and https://github.com/certbot-docker/certbot-docker/issues/20.
This PR builds on https://github.com/certbot/certbot/pull/8192 to set up builds in Azure similar to what we currently have at release time, as well as nightly builds, allowing us to catch problems in these images before a release. It also fully automates our Docker deployments, removing a manual step from our release process. We'll need to update our release instructions once this PR lands.
If you're not familiar with our `certbot-docker` setup, you can read about how these scripts customized the build process on Docker Hub at https://docs.docker.com/docker-hub/builds/advanced/.
You can see the process working properly at:
* Nightly build on my fork: https://dev.azure.com/bmw0523/bmw/_build/results?buildId=345&view=logs&j=70ac378a-cb1f-50d1-b328-169807afbcfa
* Release build on my fork: https://dev.azure.com/bmw0523/bmw/_build/results?buildId=346&view=logs&j=70ac378a-cb1f-50d1-b328-169807afbcfa
* Nightly build on Certbot's Azure setup: https://dev.azure.com/certbot/certbot/_build/results?buildId=2426&view=logs&j=70ac378a-cb1f-50d1-b328-169807afbcfa
The builds on my fork pushed to https://hub.docker.com/u/certbottest. The credentials for this account are in our shared vault in 1password if you want to play around with this.
While the scripts will (almost?) always be run in CI, I tested them successfully on macOS, Ubuntu 18.04, and Ubuntu 20.04; however, **the scripts do not seem to work when using the Docker snap, at least on Ubuntu 20.04.** They do work with the `docker.io` packages from `apt`. I was able to make things work by no longer setting `DOCKER_BUILDKIT`, but as I described in the code comments, this breaks things on Azure.
When writing this PR, I tried to make the minimal modifications to our current setup to get the behavior we want. I'm planning on splitting the Docker builds into different Azure jobs so it doesn't increase the overall build time, but this isn't trivial, so I figured it would be best done in a separate PR.
* Remove license.
* update build scripts
* write deploy code
* Remove unused READMEs.
* rewrite readme
* Make testing on a fork easier.
* Set up Azure automation.
* fix typo
* Make output more verbose.
* clean up cleanup...everybody everywhere
* separate build and deploy
* Document docker-hub credentials
* Use Docker BuildKit when building.
* Remove unneeded .gitignore files.
* Fix tools/docker/README.md grammar
Co-authored-by: ohemorange <ebportnoy@gmail.com>
* Clarify <TAG> in README.
* no docker snap
* rename docker job
Co-authored-by: Erica Portnoy <ebportnoy@gmail.com>
* Improve github release creation process
* Comment file
* Update tools/create_github_release.py
Co-authored-by: Brad Warren <bmw@users.noreply.github.com>
* run chmod +x on tools/create_github_release.py
* Add description of create github release method
* remove references to unnecessary azure credential
* remove unnecessary import
* Add reminders to update other file to definitions in .azure-pipelines
* Raise an error if we fail to fetch the artifact from azure
* Create github release as a draft, upload artifacts, then un-draft, for hooks to be run at the right point
* get the version number from the release
* add new packages to dev3_extras so they're installed by tools/venv3.py
* remove unnecessary import
* fun fact: tempdirs behave differently when used as a context manager
* Move comment to construct.py
Co-authored-by: Brad Warren <bmw@users.noreply.github.com>
It seems my old instruction isn't required any longer for Gentoo. To be honest, I don't know since when, but my own Gentoo server isn't even using the workaround currently mentioned in the documentation. So it seems the Apache plugin works just fine without this workaround 🤦
Also, the Gentoo repository has obviously included the nginx plugin for a long time now. I guess my original text is ancient. It also includes *one* of the many DNS plugins, with a different maintainer than the other "main" packages. It currently only has version 0.39.0, so I don't know if it's being maintained officially.
* Remove obsolete Gentoo instructions and add packages.
* Capitalize note
Co-authored-by: Brad Warren <bmw@users.noreply.github.com>
* nginx: add --nginx-sleep-seconds
As described in #7422, reloading nginx is an asynchronous process and
Certbot does not know when it is complete. In an environment where this
reload takes a long time, the nginx plugin suffers from an issue where
it responds to and fails the ACME challenge before the nginx server is
ready to serve it.
Following the discussion in a previous PR #7740, this commit introduces
a new flag, --nginx-sleep-seconds, which may be used to increase the
duration that Certbot will wait for nginx to reload, from its previously
hard-coded value of 1s.
Fixes #7422
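A minimal sketch of the reload-and-wait behavior, assuming a hypothetical `reload_nginx` helper rather than the plugin's actual restart code:
```
import subprocess
import time


def reload_nginx(nginx_ctl="nginx", sleep_seconds=1):
    # Ask nginx to reload its configuration. The reload is asynchronous,
    # so wait a configurable number of seconds before answering challenges;
    # --nginx-sleep-seconds controls this value.
    subprocess.run([nginx_ctl, "-s", "reload"], check=True,
                   stdout=subprocess.PIPE, stderr=subprocess.PIPE)
    time.sleep(sleep_seconds)
```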
* update CHANGELOG
* nginx: update docstring for nginx_restart
Fixes #8169
This PR improves the snap remote build script by dumping the output of `snapcraft remote-build` when unexpected behavior is detected:
* when all builds for a project finish with a zero status code and none of them are marked as failed, we expect to have all the associated snap files available locally.
* when some builds are marked as failed, we expect to have a build output for each of them available locally.
In these two situations, if the expectations are not met, the script will display the output of `snapcraft remote-build` itself. I also added error handling to deal gracefully with the absence of an expected build output on the local machine.
* Improve log dump in snaps remote builds when an unexpected behavior is detected
* Use the manager
* Update tools/snap/build_remote.py
Co-authored-by: Brad Warren <bmw@users.noreply.github.com>
Fixes #7863.
Connect command is `sudo snap connect certbot-dns-dnsimple:certbot-metadata certbot:certbot-metadata`
Logs are `cat /var/snap/certbot-dns-dnsimple/current/debuglog`
Echoes in the hook are only printed to the terminal when it exits 0; otherwise, check the logs in the `debuglog` mentioned above.
Manual tests include all iterations of connected, unconnected, installed for the first, second time, etc, with passing and failing version checks.
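The version comparison the hook ends up doing (using `packaging.version.parse`, as mentioned in the commits below) could look roughly like this sketch; the function name is an assumption:
```
from packaging import version


def refresh_allowed(certbot_version, plugin_version):
    # The plugin snap only allows the refresh to proceed when the installed
    # Certbot is at least as new as the plugin being refreshed.
    return version.parse(certbot_version) >= version.parse(plugin_version)


# Example: an older Certbot blocks the plugin refresh.
assert refresh_allowed("1.7.0", "1.6.0")
assert not refresh_allowed("1.5.0", "1.6.0")
```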
* Make dnsimple not update if certbot is too old
* create an interface to read cb version
* add missing newline
* fix syntax
* trying to figure out the consumer syntax
* trying to figure out the consumer syntax, again
* only check post first install
* valid setting name
* test for first install differently
* snapctl doesn't error if it fails I guess
* time to do some print debugging
* continue playing with syntax
* once again, fooled by bash int vs string comparisons!
* debugging
* if we use post and pre together we can do this
* is this how content interface syntax works
* it's a directory?
* more debug
* what's that error message again?
* try other syntax
* if it's not documented just guess at syntax
* actually, I think this is the syntax
* oops didn't set for new hook
* test passing information along connection
* interface attributes can only be set during the execution of prepare hooks
* just do it with main connection
* undo last few test changes
* Add some printing to make sure we understand what's going on
* create empty directory to bind to
* put mkdir in the correct part
* let's inspect the environment
* it can't run bash directly.
* perhaps only directories can be shared via the content interface
* update name of folder
* echo to debug log to understand what's going on exactly. we have file access though!
* update grep for new file
* more printing
* echo to the debug log
* ok NOW all print statements are going to the log
* why does echo need two >s
* remove unnecessary extra check, just check if the init file is available
* check if certbot version will be available post-refresh after all
* pre-refresh hook is not necessary to get certbot version
* update mkdir so we don't have to clean each time
* try comparing version numbers in python
* it's python3
* we need different prints for if we succeed or if we fail.
* improve bash syntax
* remove some debugging code
* Remove debug script
* remove spaces for clarity
* consolidate parts and remove more test code
* s/certbot-version/certbot-metadata/g
* use sys.exit instead of exit
* find and save certbot version on the certbot side
* change presence test to new file
* switch to using packaging.version.parse instead of LooseVersion
* switch to requiring certbot version >= plugin version
* add plugin snap changes to generate script
* Add comment to generation file saying not to edit generated files manually
* Create post-refresh hook for all plugins with script
* generate files using new script
* update snapcraft.yaml files for plugins
* bin/sh comes first
* Add packaging to install_requires
* Check that refresh is allowed in integration test
* switch plug and slot names in integration test
* Update tools/generate_dnsplugins_postrefreshhook.sh
Co-authored-by: Brad Warren <bmw@users.noreply.github.com>
* small bash fixes
* Update snap readme with new instructions
* Run tools/generate_dnsplugins_postrefreshhook.sh
* Update tools/snap/generate_dnsplugins_postrefreshhook.sh
Co-authored-by: Brad Warren <bmw@users.noreply.github.com>
Co-authored-by: Brad Warren <bmw@users.noreply.github.com>
Snapcraft has a feature named `remote-build`. It allows compiling snaps for several architectures using Canonical's dedicated build infrastructure. Compared to the QEMU-enabled Docker approach used currently, the remote build has several advantages:
* the builds are done on the native architecture, making them generally faster than what can be achieved with QEMU
* it avoids depending on `adferrand/snapcraft` (which could otherwise be fixed by the merge of https://github.com/snapcore/snapcraft/pull/3144, but this will not happen in the short term)
* when everything is good, all snap builds can run in parallel and be orchestrated by one single Azure Pipelines job, since the heavy tasks are done remotely.
This PR makes the necessary adjustments to use the remote build feature instead of the QEMU-enabled Docker approach.
One complex task was to be able to compile the `certbot` snap on `arm64` and `armhf`. Indeed, on these architectures the pre-compiled wheel for `cffi` is not available, so it needs to be compiled during the snap build. Sadly, the current version of the python plugin in snapcraft is limited by the fact that `wheel` is not installed in the virtual environment set up to build the python packages, and there is no easy way to change that except by overriding the whole build process.
In the long term, I think I will open a PR on the `snapcraft` Git repository to provide a proper solution. But for the short term, I used the possibility of providing arguments to the `venv` module to add the flag `--system-site-packages`. With it, the virtual environment can use the system site packages, where `wheel` is available.
The other significant additions are in the `tools/snap/build_remote.py` script. While invoking the remote build on a local machine is quite straightforward, it is another story in CI, because we need build auditability and resiliency during these non-interactive actions. In particular, we should avoid inconsistent results on the nightly and release pipelines as much as possible.
So this script wraps the `snapcraft` call in retry logic and improves its logs in the context of parallel builds.
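A rough sketch of that retry wrapper, assuming the real script also passes architecture and credential options to `snapcraft remote-build` (omitted here):
```
import subprocess
import time


def run_remote_build(max_retries=3, delay_seconds=60):
    # The real script passes extra options to `snapcraft remote-build`;
    # they are omitted from this sketch.
    cmd = ["snapcraft", "remote-build"]
    for attempt in range(1, max_retries + 1):
        process = subprocess.run(cmd, capture_output=True, text=True)
        if process.returncode == 0:
            return process.stdout
        print(f"Attempt {attempt} failed; snapcraft output was:\n{process.stdout}")
        time.sleep(delay_seconds)
    raise RuntimeError("snapcraft remote-build failed after all retries")
```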
The minor modifications are mainly about ensuring that the plugins can be built (some of them also need `cffi`, for instance) and simplifying the Azure pipeline, since all snaps are retrieved in one go.
Please note that the `test-` branches still build only the `amd64` architecture. Indeed, I noticed that builds on `arm64` and `armhf` tend to be very slow to start (up to 40 minutes), while the `amd64` ones wait at most 10 minutes, and usually only 30 seconds when the overall load on Canonical's side is low.
To work on the `certbot/certbot` repository, one secure file needs to be added, because `snapcraft` needs to be authenticated against Launchpad with credentials allowing remote builds. To do so, from a local machine that has this capability, one can extract the existing file at `$HOME/.local/share/snapcraft/provider/launchpad/credentials` and register it as a secure file in Azure Pipelines with the name `snapcraftRemoteBuildCredentials`.
* Define scripts
* Setup pipeline to use remote builds
* Focus on packaging builds
* Set credentials
* Setup git
* Launch all builds in parallel
* Add dev dependencies to build cffi and cryptography
* Convert to a python logic
* Reorganize the pipeline
* Handle the fact that snap builds may be taken from cache
* Generate constraints
* Exit code
* Check existence
* Try to handle better non zero exit code
* Add --system-site-packages to get wheel in the venv
* Add executable permissions
* Troubleshoot
* Dynamic display, take the maximum timeout for snap build job
* Allow retries if the remote build does not start
* Trigger only amd64 builds for test branches
* Exit properly
* Update snapcraft.yaml
* Fix snap run
* Set secured file name
* Update .azure-pipelines/templates/jobs/packaging-jobs.yml
Co-authored-by: Brad Warren <bmw@users.noreply.github.com>
* Update .azure-pipelines/templates/jobs/packaging-jobs.yml
Co-authored-by: Brad Warren <bmw@users.noreply.github.com>
* Update .azure-pipelines/templates/jobs/packaging-jobs.yml
Co-authored-by: Brad Warren <bmw@users.noreply.github.com>
* Move order in deps
* Reactivate all builds
* Use Manager() as a context manager
* Use Pool as a context manager
* Some nice refactorings
* Check snapcraft execution interruption with exit codes
* Use f-string and format expressions
* Start log
* Consistent use of single/double quotes
* Better loop to extract lines
* Retry on build failures
* Few optimizations
Co-authored-by: Brad Warren <bmw@users.noreply.github.com>
Fixes #8149
This PR adds warnings about the upcoming deprecation of Python 3.5 in Certbot.
* Add warnings about Python 3.5 deprecation in Certbot
* Update certbot/certbot/__init__.py
Co-authored-by: Brad Warren <bmw@users.noreply.github.com>
Co-authored-by: Brad Warren <bmw@users.noreply.github.com>
«When you add or change DNS zones or records, your changes will now be
reflected at our authoritative nameservers in under 60 seconds. This is
down from the previous “every quarter hour” approach that we had for so
long.» - https://www.linode.com/blog/linode/linode-turns-17/
Fixes #4351
This PR proposes a solution to use third party plugins without the prefix `pip_package_name:` in the plugin name, plugin-specific flags, and keys in DNS plugin credential files.
A first solution was proposed in #6372, and a more advanced one in #7026. #7026 also added a deprecation warning when the old plugin name `pip_package_name:plugin_name` was used.
However, there were some limitations with #7026, in particular the fact that existing flags of the form `pip_package_name:dns_plugin_option`, or keys like `pip_package_name:key` in DNS plugin credential files, were not read anymore. This would have led to silent failures during renewals if the configuration was not explicitly updated by the user.
I tried to fix that based on #7026, but the changes needed are complex and create new problems of their own, like unexpected erasure of values in the renewal configurations.
Instead, this PR tries a new approach: when `find_all()` is called, the `PluginsRegistry` in the `certbot._internal.plugins.disco` module registers two plugins for a given entry point referring to a third party plugin:
* one plugin with the name `plugin_name`
* one plugin with the name `pip_package_name:plugin_name` (like before)
This way, every existing configuration continues to work without any change (credentials, renewal configuration, CLI flags), and new configurations can refer to the new plugin name without the prefix and use the appropriate CLI flags and credentials without this prefix.
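A minimal sketch of the dual registration idea, using hypothetical names (`register_entry_point`, a dict-like `registry`) rather than the actual `PluginsRegistry` internals:
```
def register_entry_point(registry, entry_point):
    """Register a third party plugin under both its short and prefixed names."""
    plugin_cls = entry_point.load()
    short_name = entry_point.name                                  # e.g. "dns-example"
    prefixed_name = f"{entry_point.dist.project_name}:{entry_point.name}"
    registry[short_name] = plugin_cls
    # The prefixed name is kept for backwards compatibility; it is hidden
    # from `certbot plugins` output and emits a deprecation warning when used.
    registry[prefixed_name] = plugin_cls
```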
On top of it I added the deprecation path given in #7026 (thanks @coldfix!):
* the plugin named `pip_package_name:plugin_name` is hidden from `certbot plugins` output
* the help for this plugin is still displayed, and a deprecation warning is displayed in the description
* when invoked, the same deprecation warning is displayed in the terminal
* Support both prefixed and not prefix third party plugins
* Adapt tests
* Add deprecation path
* Named parameters
* Add deprecation warning in CLI
* Add a changelog
Fixes #8041
This PR makes Azure Pipelines build the DNS plugin snaps for the 3 architectures during the CI.
It leverages the existing logic for building the Certbot snap in order to deploy a QEMU environment with Docker, and leverages the local PyPI index to speed up the build when installing `cffi` and `cryptography`.
All DNS plugin snaps are built in one single Docker container, in order to save the time required to install the system dependencies upon the first start of `snapcraft`, and so speed up the build significantly.
In the end, all `amd64` DNS plugin snaps are built within 6 minutes. For `arm64` and `armhf`, it is around 40 minutes: this is actually quite fast, considering that 14 DNS plugin snaps are built.
However, building all 3 architectures is still an extremely heavy task, even for Azure Pipelines and its 10 parallel jobs. That is why I have the `arm64` and `armhf` builds skipped for the `full-test-suite`, and let them run only for `nightly` and `release`. This means however that these builds will not be done for the release branches. If this is a problem, I can put a more elaborate suspend condition in place to trigger the builds in this case.
All snaps are stored in the pipeline artifacts storage, making them available for publication during a `release` pipeline.
The PR is set as Draft for now, because I am temporarily using `pr_test-suite` to validate the packaging jobs when commits are pushed. Once the PR is ready, I will revert it back to the normal configuration (run the standard tests).
* Configure a script to build DNS snaps
* Focus on packaging
* Trigger all architectures
* Add extra index
* Prepare conditional suspend
* Set final suspend logic
* Set final suspend value
* Loop for publication
* Use python3
* Clean before build
* Add a test
* Add test job in Azure
* Preserve env
* Apply normal config for pipelines
* Skip QEMU jobs only for test branches
* Makes snap run tests depends also on the Certbot snap build
* Update .azure-pipelines/templates/jobs/packaging-jobs.yml
Co-authored-by: Brad Warren <bmw@users.noreply.github.com>
* Update .azure-pipelines/templates/stages/deploy-stage.yml
Co-authored-by: Brad Warren <bmw@users.noreply.github.com>
* More accurate way to get the plugin snap name
* Integrate DNS snap tests into certbot-ci
* Fixes
* Update certbot-ci/snap_integration_tests/conftest.py
Co-authored-by: Brad Warren <bmw@users.noreply.github.com>
* Update certbot-ci/snap_integration_tests/conftest.py
Co-authored-by: Brad Warren <bmw@users.noreply.github.com>
* Clean an _init_.py file
Co-authored-by: Brad Warren <bmw@users.noreply.github.com>
People who are considering running Certbot with Docker are probably doing so because their webserver is to be run with Docker. These changes to the README should help them to understand that doing so will require knowledge of Docker volumes and that the architectural justification for running Certbot in a separate container is the "one service per container" best practice.
Short PR to improve some things during snap builds:
* clean up snapcraft assets before a build, in order to avoid some weird errors when two builds are executed consecutively without cleanup
* use python3 explicitly in `tools/simple_http_server.py` because on several recent distributions the `python` binary is not exposed anymore, only `python2` or `python3`.
If you go to a URL like https://snapcraft.io/certbot/releases and try to move the Certbot snap into the candidate or stable channels, you cannot do so. There is a tooltip which says that revisions with the grade devel cannot be promoted to candidate or stable channels.
The documentation for `grade` can be found at https://snapcraft.io/docs/snapcraft-yaml-reference where it says the value is optional and
> Defines the quality grade of the snap.
> Type: enum
> Can be either devel (i.e. a development version of the snap, so not to be published to the stable or candidate channels) or stable (i.e. a stable release or release candidate, which can be released to all channels)
> Example: [stable or devel]
I'm working on a proposal for our next steps for snaps which involves moving the Certbot snap to the stable channel. I of course won't make those changes without giving others a chance to share their opinion, but I'd like to avoid the situation where we're technically unable to move the Certbot 1.6.0 snap to the stable channel despite wanting to do so.
I started to make the same changes to the DNS plugins, but I personally think it's too soon to propose stable versions of those yet and `grade` is a simple way to ensure we don't accidentally promote something there.
You can see the snap being built and run successfully with this change at https://dev.azure.com/certbot/certbot/_build/results?buildId=2246&view=results.
Fixes #8071 and fixes https://github.com/certbot/certbot/issues/8110.
This PR migrates every job from Travis to Azure Pipelines.
This PR essentially converts the Travis jobs into Azure Pipelines jobs with identical functionality (or I made a mistake). The jobs are added to the relevant existing pipelines (`main`, `nightly`, `advanced-test`, `release`). A global refactoring using the templating system greatly reduces the verbosity of the pipeline descriptions.
A specific feature (not present in Travis) is added: the `On_Failure` stage. Using the Mattermost API directly, it notifies a Mattermost channel of pipeline failures with a link to the failed pipeline, without needing to authenticate to Microsoft.
See https://github.com/certbot/certbot/pull/8098#issuecomment-649873641 for the post merge actions to do at the end of this work.
Fixes #7420.
* Set up CentOS 8 test farm tests
* Don't add to apache2_targets until 7273 is resolved
* Start upgrade test from a version that works on centos 8
* remove when possible from targets
* acme: add support for alternative cert. chains
* certbot: add --preferred-chain
* remove support for issuer SKI matching
* show --preferred-chain in "run" help
* warn if no chain matched and it's not a dry-run
* fix existing failing tests
* add unit, integration tests
* bump acme dependency to dev version
* simplify test to avoid py2.7 recursion bug
* add preferred_chain to STR_CONFIG_ITEMS
* reduce preferred_chain warning to info level
* acme: fix some docstrings in .messages
* certbot: fix docstring in crypto_util
* try to fix certbot-nginx acme dep problem
Fixes #8093.
This PR modifies and audits all uses of `subprocess` and `Popen` outside of tests, `certbot-ci/`, `certbot-compatibility-test/`, `letsencrypt-auto-source/`, `tools/`, and `windows-installer/`. Calls to outside programs have their `env` modified to remove the `SNAP` components of paths, if they exist. This includes any calls made from hooks, calls to `apachectl` and `nginx`, and to `openssl` from `ocsp.py`.
For testing manually, rsync flags will look something like:
```
rsync -avzhe ssh root@focal.domain:/home/certbot/certbot/certbot_*_amd64.snap .
rsync -avzhe ssh certbot_*_amd64.snap root@centos7.domain:/root/certbot/
```
With these modifications, `certbot plugins --prepare` now passes on CentOS 7.
If I'm wrong and we package the `openssl` binary, the modifications should be removed from `ocsp.py`, and `env` should be passed into `run_script` rather than set internally in its calls from nginx and apache.
One caveat with this approach is the disconnect between why it's a problem (packaging) and where it's solved (internal to Certbot). I considered a wrapping approach, but we'd still have to audit specific calls. I think the best way to address this is robust testing; specifically, running the snap on other systems.
For hooks, all calls will remove the snap paths if they exist. This is probably fine, because even if the hook intends to call back into certbot, it can do that, it'll just create a new snap.
I'm not sure if we need these modifications for the macOS/Darwin calls, but they can't hurt.
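A rough sketch of what such an environment-scrubbing helper could look like; the real `env_no_snap_for_external_calls` in `certbot.util` may differ, and the variable names here are assumptions:
```
import os


def env_no_snap_for_external_calls():
    """Return a copy of the environment with snap-specific path entries removed."""
    env = os.environ.copy()
    if env.get("SNAP_NAME") != "certbot":
        return env  # not running from the snap; nothing to scrub
    snap_prefix = env.get("SNAP", "")
    for var in ("PATH", "LD_LIBRARY_PATH"):
        if var in env and snap_prefix:
            env[var] = ":".join(
                entry for entry in env[var].split(":")
                if not entry.startswith(snap_prefix))
    return env
```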
* Add method to plugins util to get env without snap paths
* Use modified environment in Nginx plugin
* Pass through env to certbot.util.run_script
* Use modified environment in Apache plugin
* move env_no_snap_for_external_calls to certbot.util
* Set env internally to run_script, since we use that only to call out
* Add env to mac subprocess calls in certbot.util
* Add env to openssl call in ocsp.py
* Add env for hooks calls in certbot.compat.misc.
* Pass env into execute_command to avoid circular dependency
* Update hook test to assert called with env
* Fix mypy type hint to account for new param
* Change signature to include Optional
* go back to using CERTBOT_PLUGIN_PATH
* no need to modify PYTHONPATH in env
* robustly detect when we're in a snap
* Improve env util fxn docstring
* Update changelog
* Add unit tests for env_no_snap_for_external_calls
* Import compat.os
Fixes #8103.
* Update the DNS plugin generator script to core20 syntax
* Generate new snapcraft.yamls for the DNS plugins
* Update certbot.wrapper to search for python3.8 paths
Fixes #7957
This PR makes snapcraft itself generate the dependency constraints file during the snap build, instead of having an external script do it before calling snapcraft.
This line seems to refer to [augeas.py](https://github.com/hercules-team/python-augeas/blob/v0.5.0/augeas.py) from the version of Augeas we normally have pinned. This was necessary when we were installing each Certbot component in separate parts and only combining them later to ensure that the Augeas fork (which uses cffi) was used instead of the unmodified pinned version of Augeas.
Since everything is installed in one part and we're removing the Augeas pinning now though, this line is no longer necessary. You can see the snap being built and tested successfully with this change at https://travis-ci.com/github/certbot/certbot/builds/172134518.
* Do not unstage generated cffi augeas
* Add comment about deleted line.
According to `distutils/version.py`, StrictVersion is pretty strict in
what version numbers to accept:
> A version number consists of two or three dot-separated numeric
> components, with an optional "pre-release" tag on the end. The
> pre-release tag consists of the letter 'a' or 'b' followed by a number.
This assumption already fails for some pretty basic python libraries
themselves, like setuptools, which is also available as `46.1.3.post20200610`, a
completely valid version number according to
https://www.python.org/dev/peps/pep-0440/#post-releases.
There doesn't seem to be a particular reason why StrictVersion has
been used here, so let's use LooseVersion, to be compatible with these
versions.
Co-authored-by: Adrien Ferrand <adferrand@users.noreply.github.com>
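To illustrate the difference (a quick demonstration, not code from this commit):
```
from distutils.version import LooseVersion, StrictVersion

# LooseVersion accepts PEP 440 post-releases...
print(LooseVersion("46.1.3.post20200610"))

# ...while StrictVersion rejects them with a ValueError.
try:
    StrictVersion("46.1.3.post20200610")
except ValueError as error:
    print(f"StrictVersion rejected it: {error}")
```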
* Base logic
* Various controls when email is None
* Adapt eff tests
* Forward compatibility
* Also for csr
* Explicit regr or meta updates in account objects
* Adapt logic to ask for eff subscription during registering
* Adapt tests
* Move dry-run control
* Add some relevant controls on handle_subscription call checks
This will allow DNS plugin snaps to build if they rely on unreleased acme/certbot, and removes the extra copy of Certbot from externally snapped plugins. Fixes #8064 and fixes #7946. Implementation is based on the design [here](https://github.com/certbot/certbot/issues/8064#issuecomment-645513120).
To test, see reverted commit 8632064. Steps taken:
- added changes to setup.py and snapcraft.yaml
- successfully snapped, connected, ran `sudo certbot plugins --prepare`
- added temporary changes to have both certbot and certbot-dns-dnsimple use DNSAuthenticator2
- snapped and installed certbot, `certbot plugins` failed as expected.
- snapped and installed certbot-dns-dnsimple, `sudo certbot plugins --prepare` succeeded
- Inspected dns plugin's `bin` and `lib`; no `certbot` or `acme`, as expected.
```
$ ls /snap/certbot-dns-dnsimple/current/lib/python3.6/site-packages/
OpenSSL future-0.18.2.dist-info requests_file.py
PyYAML-5.3.1.dist-info idna setuptools
_cffi_backend.cpython-36m-x86_64-linux-gnu.so idna-2.9.dist-info setuptools-47.3.1.dist-info
certbot_dns_dnsimple lexicon six-1.15.0.dist-info
certbot_dns_dnsimple-1.6.0.dev0-py3.6.egg-info libfuturize six.py
certifi libpasteurize tldextract
certifi-2020.4.5.1.dist-info past tldextract-2.2.2.dist-info
cffi pip urllib3
cffi-1.14.0.dist-info pip-20.1.1.dist-info urllib3-1.25.9.dist-info
chardet pkg_resources wheel
chardet-3.0.4.dist-info pyOpenSSL-19.1.0.dist-info wheel-0.34.2.dist-info
cryptography pycparser yaml
cryptography-2.8.dist-info pycparser-2.20.dist-info zope
dns_lexicon-3.3.26.dist-info requests zope.interface-5.1.0-py3.6-nspkg.pth
easy_install.py requests-2.23.0.dist-info zope.interface-5.1.0.dist-info
future requests_file-1.5.1.dist-info
$ ls /snap/certbot-dns-dnsimple/current/bin/
chardetect futurize lexicon pasteurize tldextract
```
- reset to HEAD^
- snapped and installed certbot to not have the DNSAuthenticator2 changes, `certbot plugins` failed as expected.
* Don't include certbot deps when EXCLUDE_CERTBOT_DEPS is set
* Set EXCLUDE_CERTBOT_DEPS in certbot-dns-dnsimple/snap/snapcraft.yaml
* fix branch name
* grammar
* remove readme link
* Remove links to Robie's repo.
* say you cant use docker
* Commands not command
* Update publishing permissions section.
* Have Certbot trust plugins.
* Do not run snapcraft with sudo.
Upgrade snap to be based on core20
This PR makes several changes so the snap is built on top of the core20 base snap. Fixes #7854.
The main changes are to `snapcraft.yaml`. With the Snapcraft 4.0/core20 base, the python plugin is a thin wrapper that basically creates a `venv` and installs the packages from source. The trouble with this is that it generates `__pycache__` caches that conflict across the different parts. So to solve that, we put everything in a single part. Other changes include:
- We use classic confinement, so we need to specify a bunch of python packages in `stage-packages`, as mentioned [here](https://forum.snapcraft.io/t/trouble-bundling-python-with-classic-confinement-in-core20-4-0-4/18234/2).
- The certbot executable now lives in `bin`, so we specify running `bin/certbot`.
- Since `python-augeas` is now being pulled into the single part, remove the pinning from constraints so we can use the latest version directly from github.
- Precompile our `cryptography` and `cffi` wheels to be based on python3.8.
Separately, we had to upgrade the snapcraft docker image to be based on focal, due to the thin wrapper situation. This was accomplished [here](https://github.com/adferrand/snapcraft/pull/1).
* Get rid of a whole bunch of error message
* Remove some more overlaps
* don't use certbot from nginx and apache
* use python3 from bin
* certbot needs to be in bin
* try to exclude just the certbot folder
* try a couple things to use the python from the venv bin
* play around with which versions of things we want from each package
* ok, certbot-nginx does need to stage bin
* certbot needs to not stage bin. why does certbot not put certbot in bin?
* fail to inspect more versions of things in the container shell
* take cffi backend from python-augeas
* if we use certbot from bin things should work?
* why is bin not in path? no idea, but let's get it compiled then inspect things in the snap shell
* use snap.certbot instead of bin/certbot
* it does require bin/certbot. I don't know why.
* let's see if we can stick it all in one step
* try installing local subdirectories
* move python-augeas into the single part
* remove after
* put back python-augeas part for now; ERROR: Could not satisfy constraints for 'python-augeas': installation from path or url cannot be constrained to a version
* how was this previously working without git installed? install git.
* maybe it needs to already have python3-wheel installed
* maybe wheel will install first if I change it to -e
* no -e
* maybe try a different python3 package to stage
* this last change wasn't necessary
* remove the bin/ from renew
* nope, it does need bin/certbot
* back to wget
* stage a bare python3
* add all necessary python packages to stage-packages
* pretty sure we don't actually need wheel. let's try removing it!
* remove python-augeas, since we have it pinned to an older version in cb-auto that might work
* stage augeas
* still need libaugeas-dev
* ok let's try building
* combining into one part works! just make sure to unpin python-augeas when generating snap-constraints.txt
* change our scripts to unpin python-augeas
* Use ubuntu 20 in compile_native_wheels.sh
* .travis.yml should use python3-dev instead of python-dev
* jk! we don't need python3-dev in travis
* Update cffi and cryptography wheels for ubuntu20 version of python
* looks like we need python3-dev to build things
* Remove deprecated i386 wheels
I initially added this when the script was doing things like migrating all LXD containers to the snap. I think the external side effects are now pretty minimal though, so I think we can remove the need for this environment variable, which makes it easier to use the script outside of CI for manual testing.
* Tweaks for improved Cloudflare API
* Update docs for dns-cloudflare
* Update tests and changelog
* Fix bad merge
* Fix error code for record add
* Improve error message
Co-authored-by: Brad Warren <bmw@users.noreply.github.com>
This PR proposes a way to build the certbot snap for several architectures, using a QEMU-based emulation approach, and several optimizations to keep the build time of each snap below 20 minutes.
Most of the reasoning for the approach proposed here is described in the original PR: https://github.com/basak/certbot-snap-build/pull/27
On top of it, I added a docker pull of a pre-compiled snapcraft Docker image, instead of compiling it during the Travis pipeline, in order to save 5 to 7 more minutes on each snap build. The images are compiled and stored here: https://hub.docker.com/repository/docker/adferrand/snapcraft. Depending on when the PR is reviewed, we can:
* continue to use `adferrand/snapcraft`
* move its logic to certbot scope and use something like `certbot/snapcraft`
* wait for https://github.com/snapcore/snapcraft/pull/3144 to be merged, and use `snapcore/snapcraft`.
* Backport https://github.com/basak/certbot-snap-build/pull/27 into Certbot project
* Fix build deps
* Integrate proactively #8012 to fix builds on non-amd64 archs
* Configure jobs on Travis
* Focus on snap builds. Disable temporarily some jobs. Disable deploy actions by security.
* Specify TARGET_ARCH during snap build
* Do not do anything if TOXENV is not set
* Various optimizations
* Use recent version of ubuntu for get correct features on snap out of the box
* Add up to date wheels
* Organizing scripts
* Set dest dir
* Get back original configuration for Travis
* Add comments
* Update common_libs.sh
* Use adferrand/snapcraft
* Test build
* Stable snapcraft
* Update build_and_install.sh
* Move back snap builds to the cron/release pipeline
* Update snap/local/compile_native_wheels.sh
Co-authored-by: Brad Warren <bmw@users.noreply.github.com>
* Update snap/local/compile_native_wheels.sh
Co-authored-by: Brad Warren <bmw@users.noreply.github.com>
* Update snap/local/compile_native_wheels.sh
Co-authored-by: Brad Warren <bmw@users.noreply.github.com>
* Update snap/local/build_and_install.sh
Co-authored-by: Brad Warren <bmw@users.noreply.github.com>
* Enable i386 builds, various optimizations
* Update dependencies
* Configure a simple http server to serve the pre compiled wheels
* Fix wheels compilation
* Relax permissions
Co-authored-by: Brad Warren <bmw@users.noreply.github.com>
* Add snap plugin support
Switch to a PoC branch of certbot that supports the new
CERTBOT_PLUGIN_PATH and wrap the snap to set this variable correctly
based on the content interfaces connected.
(cherry picked from commit 7076a55fd82116d068e2aca7239209b7203917d2)
* Modify certbot.wrapper to append to PYTHONPATH instead of separate CERTBOT_PLUGIN_PATH variable
* Update certbot-wrapper to python3.6 version
* add source field
* Update changelog
* Use bash instead of sh
Co-authored-by: Brad Warren <bmw@users.noreply.github.com>
* Exit if something goes wrong
Co-authored-by: Brad Warren <bmw@users.noreply.github.com>
* No leading : when modifying empty PYTHONPATH
* Improve bash handling of PYTHONPATH manipulation
Co-authored-by: Robie Basak <robie.basak@canonical.com>
Co-authored-by: Brad Warren <bmw@eff.org>
Co-authored-by: Brad Warren <bmw@users.noreply.github.com>
This PR has its roots in an observation I made about the current version of Certbot (1.3.0): the `renewal-hooks` directory in the Certbot configuration directory is created on Windows with write permissions for everybody.
I thought it was a critical bug since this directory contains hooks that are executed by Certbot, and you certainly do not want this folder to be open to any malicious hook that could be inserted by anyone and then executed with administrator privileges by Certbot.
It turns out that for this specific problem the bug is not critical for the hooks, because the scripts are expected to be in subdirectories of `renewal-hooks` (namely `pre`, `post` and `deploy`), and these subdirectories have proper permissions because we set them explicitly when Certbot starts.
Still, there is a divergence here between Linux and Windows: on Linux all Certbot directories without explicit permissions have at most `0o755` permissions by default, while on Windows the equivalent is `0o777`. It is not an immediate security risk, but it is definitely error-prone, unexpected, and so a potential breach in the future if we forget about it.
The root cause is that umask does not exist on Windows. Under Linux the umask defines the default permissions when you create a file or a directory. Python takes that into account, with an API for `os.open` and `os.mkdir` that exposes a `mode` parameter with a default value of `0o777`. In practice the effective mode is never `0o777` (whether you set `mode` explicitly or leave the default), because it is masked by the current umask value of the system: on Linux the umask is typically `0o022`, so files/directories have a maximum mode of `0o755` if you did not set the umask explicitly, and this is what is observed for Certbot.
However on Windows, the `mode` value passed (or taken from the default) to the `open` and `mkdir` functions of the `certbot.compat.filesystem` module is taken verbatim, since umask does not exist, and is then used to calculate the DACL of the newly created file/directory. So if the mode is not set explicitly, we end up with files and directories with `0o777` permissions.
This PR fixes this problem by implementing a umask behavior in the `certbot.compat.filesystem` module, which will be applied to any file or directory created by Certbot since we forbid using the `os` module directly.
The implementation is quite straightforward. For Linux the behavior is unchanged. On Windows a `mask` parameter is added to the function that calculates the DACL, to be invoked appropriately when files or directories are created. The actual value of the mask is taken from an internal class of the `filesystem` module: its default value is `0o755` to match the default umask on Linux, and it can be changed with the new method `umask`, which has the same behavior as the original `os.umask`. Of course `os.umask` becomes a forbidden function and `filesystem.umask` must be used instead.
Existing code that is impacted has been updated, and new unit tests are created for this new function.
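To make the mechanism concrete, here is a minimal sketch of the idea; this is not the actual `certbot.compat.filesystem` code, and the class name, function names, and the `0o022` default are illustrative assumptions:
```
import os
import sys

class _WindowsUmask:
    """Hold a umask-like value for Windows, where the kernel has none (sketch)."""
    def __init__(self):
        self.mask = 0o022  # illustrative default, mirroring a common Linux umask

_WINDOWS_UMASK = _WindowsUmask()

def umask(mask):
    """Set the process-wide mask and return the previous one, like os.umask."""
    if sys.platform != 'win32':
        return os.umask(mask)
    previous, _WINDOWS_UMASK.mask = _WINDOWS_UMASK.mask, mask
    return previous

def _apply_mask(mode):
    # On Windows the mask must be applied by hand before the mode is
    # translated into a DACL for the new file or directory.
    return mode & ~_WINDOWS_UMASK.mask

def mkdir(path, mode=0o777):
    if sys.platform != 'win32':
        os.mkdir(path, mode)
        return
    os.mkdir(path)
    # The real module would build a DACL from the effective mode here.
    print('effective mode for %s: %o' % (path, _apply_mask(mode)))
```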
* Implement umask for Windows
* Set umask at the beginning of tests
* Fix lint, update local oldest requirements
* Update certbot-apache/setup.py
Co-authored-by: Brad Warren <bmw@users.noreply.github.com>
* Improve tests
* Adapt filesystem.makedirs for Windows
* Fix
* Update certbot-apache/setup.py
Co-authored-by: Brad Warren <bmw@users.noreply.github.com>
* Changelog entries
* Fix lint
* Update certbot/CHANGELOG.md
Co-authored-by: Brad Warren <bmw@users.noreply.github.com>
Co-authored-by: Brad Warren <bmw@users.noreply.github.com>
Issue #1123 discusses a feature that allows users to set the cipher
security level. That feature wasn't built. It didn't provide enough
user value to justify the corresponding increase in complexity. The
feature request and the associated discussion threads were closed.
However, the proposed API spec and the TODO section remained in the
cipher docs. They're a vestige of that issue from olden days and this PR
removes those last living traces...
Fixes #8027.
* Add support for NetBSD by telling certbot-nginx where the nginx
configuration directory is.
* Update the CHANGELOG.
* Pass the right type of sequence to "in". Thanks lint.
* Adjust the CHANGELOG.md entry following feedback from ohemorange.
Co-authored-by: Lloyd Parkes <lloyd@must-have-coffee.gen.nz>
In #7771, the Apache configurator gained the ability to identify what
version of OpenSSL Apache's ssl_module is linked against. However, the
detection was only functional if the module was built as a DSO (which is
almost always the case).
This commit covers the case where the ssl_module is statically linked
within the Apache binary. It requires the user to specify the path to
the binary (with --apache-bin) and emits a warning if static linking is
detected but no path has been provided.
This PR upgrades Certbot pinned dependencies through `letsencrypt-auto-source/rebuild_dependencies.py` while taking into account the problems detected in https://github.com/certbot/certbot/pull/8035:
* `cryptography` is pinned to `2.8` to continue to support OpenSSL 1.0.1 on non-x86 ancient Linux distributions (RHEL 6 + Debian 8)
* `parsedatetime` is pinned to `2.5` because of an incompatibility with Python 2.7 (see https://github.com/bear/parsedatetime/issues/246)
* `letsencrypt-auto-source/rebuild_dependencies.py` now takes into account the environment markers that are added to `AUTHORITATIVE_CONSTRAINTS`: this is used for the `enum34` dependency, to not install it on Python 3.6+ and not break the distribution by swapping out the built-in `enum` module during the setup of the Certbot venv (illustrated just below).
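For illustration, a constraint gated by an environment marker looks something like the following; the exact version and marker used in Certbot's authoritative constraints may differ:
```
enum34==1.1.10; python_version < '3.4'
```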
Fixes #8030
* Pin cryptography and parsedatetime
* Upgrade dependencies
* Remove authoritative constraint
* Upgrade dependencies
* Rebuild certbot-auto
* Update letsencrypt-auto-source/rebuild_dependencies.py
Co-authored-by: Brad Warren <bmw@users.noreply.github.com>
* Honor specific requirements in the AUTHORITATIVE_CONSTRAINTS
* Fix injection
* Update dependencies
* Update rebuild_dependencies.py
Co-authored-by: Brad Warren <bmw@users.noreply.github.com>
Fixes #7988. As described there, the steps involved are:
1. Update our tests so they fail due to this problem.
2. Update the keys used in the tests so they pass with the new changes.
For 1, see a [failing travis run](https://travis-ci.com/github/certbot/certbot/jobs/340710511) with the included change. And for the full output to confirm that this is what is failing, see a [run on debian 10](https://github.com/certbot/certbot/files/4692350/debian_run_log.txt).
This PR adds `rsa4096_key.pem` and `rsa4096_cert.pem`, updates the `TLS-ALPN` test to use those keys in place of the 1024-bit versions, and fixes the README in that `testdata` folder with correct instructions to generate these files.
* export PIP_NO_BINARY in pip install subshell in test_sdists.sh
* set environment variable on the line that installs most packages
* Generate 4096-bit rsa key and cert, and fix README instructions to do so.
* Update TLS_ALPN test to use 4096-bit key instead of 1024-bit key.
* Update changelog
* Older versions of Python have an error when both VIRTUAL_NO_DOWNLOAD and PIP_NO_BINARY are set, so only apply the latter at the install phase.
* Add enum34 constraint manually, since rebuild_dependencies.py seems to be broken.
* only delete key if it exists
* Check OpenSSL version before trying to set PIP_NO_BINARY
* Add comment explaining why we only set PIP_NO_BINARY at the install step
* Add the content interface to Certbot
This commit contains a subset of the changes from 7076a55fd82116d068e2aca7239209b7203917d2.
* Normalise slot parameters
(cherry picked from commit 810941979bcf609c1e0be18e9263abf046b90e82)
Co-authored-by: Robie Basak <robie.basak@canonical.com>
Fixes #7667.
Implements the plan described in #7667.
Here's a terminal log showing that it does so:
```
# sudo snap connect certbot:plugin certbot-dns-dnsimple
error: cannot perform the following tasks:
- Run hook prepare-plug-plugin of snap "certbot" (run hook "prepare-plug-plugin":
-----
Only connect this interface if you trust the plugin author to have root on the system
Run `snap set certbot trust-plugin-with-root=ok` to acknowledge this and then run this command again to perform the connection
-----)
# snap set certbot trust-plugin-with-root=ok
# sudo snap connect certbot:plugin certbot-dns-dnsimple
# sudo snap disconnect certbot:plugin certbot-dns-dnsimple:certbot
# sudo snap connect certbot:plugin certbot-dns-dnsimple
error: cannot perform the following tasks:
- Run hook prepare-plug-plugin of snap "certbot" (run hook "prepare-plug-plugin":
-----
Only connect this interface if you trust the plugin author to have root on the system
Run `snap set certbot trust-plugin-with-root=ok` to acknowledge this and then run this command again to perform the connection
-----)
```
* Add plugin connection hook to accept root trust
* snapctl requires a configure hook to set options
* Add sh notice
* Update changelog
Fixes #7993
This PR uses `os.umask()` during the `certbot.compat.filesystem.makedirs()` call to ensure that all directories, and not only the leaf one, have the provided `mode` when created. This ensures safe and consistent behavior independent of the Python version, since the behavior of `os.makedirs` changed in this regard in Python 3.7.
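A minimal sketch of the technique (not the actual `filesystem.makedirs` code): temporarily set the umask so that every intermediate directory is created with the requested mode, then restore it.
```
import os

def makedirs_with_mode(path, mode=0o755):
    """Create `path` and any missing parents so they all end up with `mode`."""
    # Mask out every bit that `mode` does not allow while the directories are
    # created; os.makedirs then cannot hand out wider permissions to the
    # intermediate directories, regardless of the Python version.
    old_umask = os.umask(0o777 & ~mode)
    try:
        os.makedirs(path, mode)
    finally:
        os.umask(old_umask)
```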
* Implement logic to apply the same permission on all dirs created by makedirs
* Add a test
* Add comment
* Update certbot/certbot/compat/filesystem.py
Co-authored-by: Brad Warren <bmw@users.noreply.github.com>
* Error out in apache installer when mod_ssl is not available
* Update to MisconfigurationError and add/fix tests
* Remove error cases we no longer hit and associated test
* mock out function to have consistent error across machines
* improve changelog message
* only check key in modules list, not value
The error message from `python3 -m venv` when you don't have `python3-venv` installed is pretty good, but let's skip the failure and make sure it is installed the first time.
Related to #7649 since @joohoi needs a method to copy owner and permissions together from a source file to a destination file.
This PR creates the method `copy_ownership_and_mode()` in the `certbot.compat.filesystem` module to achieve this goal. Its behavior is consistent across Linux and Windows with respect to the security model that has been defined for Certbot on Windows.
The method behaves essentially the same as `copy_ownership_and_apply_mode`, but this time the permissions are extracted from the source file. On Windows this means that the DACL is copied from the source to the destination with the same content.
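For illustration, the POSIX half of such a method could look like the sketch below; the Windows branch, which copies the DACL instead, is omitted, and the exact signature of the real function may differ.
```
import os
import stat

def copy_ownership_and_mode(src, dst):
    """Copy owner, group, and permission bits from src to dst (POSIX sketch)."""
    info = os.stat(src)
    os.chown(dst, info.st_uid, info.st_gid)    # ownership
    os.chmod(dst, stat.S_IMODE(info.st_mode))  # permission bits only
```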
* Create copy_ownership_and_mode to copy both owner and mode from src to dst
* Update certbot/tests/compat/filesystem_test.py
Co-authored-by: Brad Warren <bmw@users.noreply.github.com>
* Fix docstring
Co-authored-by: Brad Warren <bmw@users.noreply.github.com>
So, setuptools broke the installation setup by removing a deprecated API that is still used by some of our dependencies (see pypa/setuptools#2017).
This PR fixes the Docker build by using pipstrap to pin pip/setuptools/wheel, as is done in several critical places (certbot-auto, ...).
An issue in certbot is open to fix the problem more generally with the most recent versions of setuptools: certbot/certbot#7976
I rebuilt all the Docker images locally (certbot + DNS plugins) for the three architectures, and all builds passed.
`tox -e cover` fails for me on macOS. This is due to the differences in the code that is run when on Linux vs. other platforms in `certbot.util` and its tests. Diffing my local coverage with Travis, the only difference in lines missing test coverage is:
```
$ diff travis.txt local.txt
< certbot/certbot/util.py 238 24 90% 132-134, 210, 275-281, 318, 330-340, 347, 381-382
> certbot/certbot/util.py 239 34 86% 30, 132-134, 210, 275-281, 301, 305, 316-318, 330-340, 347, 367-373, 381-382
< certbot/tests/util_test.py 392 0 100%
> certbot/tests/util_test.py 375 26 93% 483-487, 492-502, 507-514, 548-550, 555-557
```
I think tests on `master` should not be failing locally for people.
While there would be other ways to fix this by adding `# pragma: no cover` lines or writing mocked out tests for other platforms, I personally just think dropping the minimum coverage one percentage point is fine at least for now.
Coming out of the conversation at #7863 in the linked Google Doc, we should always have at least 1 release between updating one of our plugins to stop using a deprecated acme/certbot API and removing it from acme/certbot. Doing this gives the plugin changes time to propagate rather than potentially having the plugin break because Certbot was updated before the plugin had made the necessary changes.
This comment here should help ensure this.
* Add pytest warnings warning.
* clarify comment
Fixes #7713.
As discussed in #7713, providing a PowerShell script as a hook for Certbot does not currently work. This is because hooks are run in a `cmd` environment, which recognizes only `.bat` files as valid scripts that can be run by their bare name on the command line.
PowerShell, on the other hand, recognizes both `.bat` and `.ps1` scripts as valid scripts.
This PR makes hook commands be executed by PowerShell instead of `cmd`, which `Popen` uses by default on Windows when `shell=True` is set. It also modifies the tests to handle this new environment, in particular in terms of encoding (UTF-16-LE is the default in PowerShell).
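A rough sketch of the mechanism, assuming a helper of this shape rather than Certbot's actual hook-execution code:
```
import subprocess
import sys

def run_hook(shell_cmd):
    """Run a hook command, using PowerShell on Windows so .ps1 hooks work."""
    if sys.platform == 'win32':
        # cmd.exe only runs .bat files by bare name; PowerShell accepts
        # both .bat and .ps1 scripts.
        proc = subprocess.Popen(['powershell.exe', '-Command', shell_cmd],
                                stdout=subprocess.PIPE, stderr=subprocess.PIPE)
    else:
        proc = subprocess.Popen(shell_cmd, shell=True,
                                stdout=subprocess.PIPE, stderr=subprocess.PIPE)
    out, err = proc.communicate()
    # Note: PowerShell may emit UTF-16-LE output, so decoding must account for it.
    return proc.returncode, out, err
```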
* Run hooks in powershell on Windows
* Fix hook test
* Fallback to unittest.mock
* In fact, shell_cmd as a list of str could not work. Declare only str as acceptable input for shell_cmd.
* Added changelog
Despite this PR (only) being ~200 lines containing mostly code copied from another repo, there is a lot going on here. For the sake of making it both easier to review and to remember some of these things in the future by referring back to this PR, I've documented a lot of noteworthy things with section headers below. With that said, it's probably not necessary to read each section unless you're interested in that topic.
The most noteworthy thing for the reviewer is **this PR should be merged and not squashed** to preserve authorship. To merge this code, once we're happy with this PR, I'll probably open a new PR squashing any commits I make in response to review comments back into a single commit to try to keep history somewhat clean. To help prevent this PR from being accidentally squashed, I'm making this a draft PR for now.
### Git history of https://github.com/basak/certbot-snap-build
I think it is worth preserving the git history of https://github.com/basak/certbot-snap-build that this PR is based on in this repo to help us track why things were done a certain way. To do this while keeping our git history somewhat clean, I took the approach described at https://stackoverflow.com/questions/1425892/how-do-you-merge-two-git-repositories/21495718#21495718 to move all history of https://github.com/basak/certbot-snap-build into a `snap` directory. I then squashed all commits so that sequential commits from the same author are one commit. I probably could have reordered commits to try and squash things a little more, but I personally don't think it's worth the trouble. Finally, I merged this rewritten history into this branch of the Certbot repo.
The contents of the `snap` directory are identical to the current contents of https://github.com/basak/certbot-snap-build before my final commit in this PR which makes the changes to make things work in this repo.
### Travis stages
This is described in general at https://docs.travis-ci.com/user/build-stages/, but I don't think we should deploy the snap if any of our tests are failing. To accomplish this, I created a "Snap" stage that builds, tests, and deploys the snap which is only executed after a "Test" stage that contains all of our other tests. The "Snap" stage will not run until the "Test" stage completes successfully.
### snap/local
This directory is ignored by `snapcraft` which I think makes it a good place to store `snap` specific scripts like `build_and_install.sh`.
See https://bugs.launchpad.net/snapcraft/+bug/1792203 for more info.
### Why remove certbot-compatibility-test from apacheconftest toxenvs?
Because it's not used. In theory, it could go in its own PR, but it'll create merge conflicts with this one so I'd personally prefer to include this simple change in this PR as well.
### Checklist for landing this PR
- [x] Squash all of my commits into one commit
- [x] Update the release instructions to have to move the snap to the beta channel
- [x] Shut down Robie's nightly builds probably by updating his repo to say that the code has moved here and deleting everything
* merge .gitignore
* Move snapcraft.yml up one level.
* update source
* move test.sh to tox.ini
* use new tox.ini in .travis.yml
* move snap build code
* make script executable
* remove unused python3-dev
* don't use deprecated classic flag
* go back to stable channel
* add nginx in snap addons
* add deploy steps
* Add comments explaining external tox envs.
* error if not in CI
* don't use --depth
* remove old .travis.yml
* Add big comment about SNAP_TOKEN.
* Set all_branches: true.
* Add repo setting.
* run travis on tags
* Add more documenting comments to .travis.yml.
* snap: move snapcraft.yaml to snap directory
Signed-off-by: Sergio Schvezov <sergio.schvezov@canonical.com>
* snap: use a local plugin to get around the delivered plugin
Add a plugin to the project which behaves as expected until a version
of snapcraft satisfies the project needs.
Additional snapcraft.yaml changes were made to accommodate for the snap
to build.
Signed-off-by: Sergio Schvezov <sergio.schvezov@canonical.com>
* snap: compile pycache in the last step for the last part
Signed-off-by: Sergio Schvezov <sergio.schvezov@canonical.com>
* Remove legacy Store upload credentials
These have not been needed since 5d7969a.
* Work around dev part dependency failure case
Get pip to install certbot from its VCS repository directly during the
build of the nginx and apache plugin parts.
This works around issue #12 when it affects the interaction between the
apache or nginx plugin and certbot itself.
It does not work around the case where the same problem occurs in the
interaction between certbot and acme. This looks harder to work around
because pip's VCS URL handling doesn't appear to include a facility to
install from a subdirectory of a git repository and this is where the
acme source is located.
* Switch to using lxd for the snapcraft run
The Docker image is 16.04 only. Before we can switch to 18.04, we need
to remove Docker, which means using the snapcraft snap using Travis' new
snap support, and then using the lxd functionality in snapcraft so that
it is enabled to build in the appropriate environment.
* Switch build to use core 18
Fixes: #14
* Move constraints into a list
This seems to be a requirement of either newer snapcraft, snapcraft
in lxd, or the move to core18 (it isn't clear to me which). This fixes
the error:
Failed to load plugin: properties failed to load for certbot: The 'constraints' property does not match the required schema: '$SNAPCRAFT_PART_SRC/constraints.txt' is not of type 'array'
* version-script -> snapcraftctl set-version
The use of version-script seems to break with either newer snapcraft,
snapcraft with lxd or core18 (it's not clear to me which). The breakage
is related to "parts/certbot/src" not being found. This can be fixed
with $SNAPCRAFT_PART_SRC, but this doesn't seem to be defined at
"version-script time".
However https://snapcraft.io/docs/deprecation-notices/dn10 deprecates
the use of version-script, and if we convert to the recommended new way,
then we use override-pull instead and $SNAPCRAFT_PART_SRC is defined
there, so this conveniently fixes both problems at once.
* Do not explicitly install snapd
Since this is now handled by the Travis addon, we do not need to do it
explicitly.
* Add Travis notifications
* Adjust automatic snap deployment configuration
Travis now has a documented[1] "snap" provider and the previous
experimental mechanism seems to have stopped working, presumably because
it was deprecated in favour of this new mechanism.
[1] https://docs.travis-ci.com/user/deployment/snaps/
* Configure for python3
* Update tests
* Use appropriate virtualenv
* Install nginx for the integration tests
* Try use LD_LIBRARY_PATH to find augeas shared library in snap when python-augeas is invoked
* Update travis to use built-in setup capabilities
* Update .travis.yml
* Add acme build
* Update tests
* Try more recent dist
* Update command
* Clean tests
* Add back augeas
* Add env
* Revert to last working snapcraft config
* Add a gitignore
* Reintegrate acme. Declare augeas in certbot parts
* Use release version of certbot
* Try new approach
* Fix config
* Directly install version of python-augeas from pypi
* Restart from basic
* Clone only once certbot repository. Use pinned versions of dependencies from certbot-auto.
* Try relatively to source
* Use snapcraft env variables
* Strip hashes
* Fix path
* Redefine path
* Continue to prepare the runtime
* Fix command line
* Update .travis.yml
* Add back certbot-apache
* Update snapcraft.yaml
* Build snap against the latest release of certbot
* Add renewal timer
* Install libaugeas0 in python-augeas part build
This part needs libaugeas0 to build.
* Bump to 0.26.1
* Always act directly on upstream master
I want to keep this always working, so move to master. We can
reintroduce upstream stable releases when we are ready for general use.
Closes: #5
That particular issue seems to no longer happen. Presumably something
changed in upstream git or in PyPI. If it happens again, hopefully I'll
have CI against upstream master up by then and I'll be able to pin it
down.
* Add empty Travis build
* Add Travis automatic snap edge publication
* Add integration test
This uses upstream's test suite from their source tree to check the
built snap to make sure it behaves as expected, before attempting upload
to the store.
* Point Augeas to its lens library
Augeas defaults to looking in /usr/share/augeas/lenses, which in a snap
isn't found at this path, but inside $SNAP. So set AUGEAS_LENS_LIB to
where the lenses can be found within the snap.
This fixes the Apache plugin that uses Augeas.
In #7925 we accidentally changed the logic here. Before it was:
type = cron OR (type = push AND branch NOT IN (apache-parser-v2, master))
now it's
type = cron OR (type = push AND branch = master)
We want to be able to run our full test suite on things like test-* branches. The reason we had been excluding master is that it has the full test suite run on it nightly (through what Travis calls cron), and not running on every commit helps prevent us from waiting on CI since our nightly tests spin up so many jobs.
This PR changes things back to the intended behavior.
(We could talk about changing the condition to just type = cron OR type = push if we want, but I'd rather do that in a separate PR once things like test- branches are fixed.)
Fixes #7268
I removed the reference to automatically selecting which ACME protocol we use, since at some point we'll want to rip out the non-spec-compliant ACMEv1 code.
Python 2 is going to get harder and harder to install locally so I don't think we should assume/require devs to have it installed.
This PR builds on #7905 so our developer guide only has people use Python 3.
Part of #7886.
This PR conditionally installs `mock` in `certbot-dns-*/setup.py` based on setuptools version and python version, when possible. It then updates the tests to use `unittest.mock` when `mock` isn't available.
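The test-side fallback is essentially the standard pattern below (a sketch; the real modules also add the `# type: ignore` and `# pragma: no cover` annotations described in these PRs):
```
try:
    import mock  # third-party backport, still required on Python 2
except ImportError:  # pragma: no cover
    from unittest import mock  # built into Python 3.3+

# Either way, the tests can use the same API afterwards:
patcher = mock.patch('os.path.exists', return_value=True)
```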
* Do not require mock in Python 3 in certbot-dns modules
* update changelog
* error when trying to build wheels with old setuptools
* add type: ignores
Part of #7886.
This PR conditionally installs mock in `apache/setup.py` based on setuptools version and python version, when possible. It then updates `apache` tests to use `unittest.mock` when `mock` isn't available.
* Conditionally install mock in apache
* error out on newer python and older setuptools
* error when trying to build wheels with old setuptools
* use unittest.mock when third-party mock isn't available in apache, with no cover and type ignore
This PR is exactly the same as #7895, but now we know a little bit more about what was going on with `mypy`.
Part of #7886.
This PR conditionally installs mock in `certbot/setup.py` based on setuptools version and python version, when possible. It then updates `certbot` tests to use `unittest.mock` when `mock` isn't available.
* Conditionally install mock in certbot
* use unittest.mock when third-party mock isn't available in certbot
* Add type:ignores because of https://github.com/python/mypy/issues/1153
* error out on newer python and older setuptools
* error when trying to build wheels with old setuptools
Part of #7886.
This PR conditionally installs mock in `acme/setup.py` based on setuptools version and python version, when possible. It then updates `acme` tests to use `unittest.mock` when `mock` isn't available.
Now with `type: ignore` as appropriate. Once the "future steps" of #7886 are finished, and mypy is on Python 3, the `pragma no cover`s and `type ignore`s will be gone.
* Conditionally install mock in acme
* error out on newer python and older setuptools
* error when trying to build wheels with old setuptools
* use unittest.mock when third-party mock isn't available in acme, with no cover and type ignore
This PR fixes the Travis failures that can be seen https://travis-ci.com/certbot/certbot/builds/160258644. Running the tests locally, it looks like Ubuntu has started shutting down the 19.04 repos which makes sense as this release has been EOL'd. See https://wiki.ubuntu.com/Releases.
I have the full suite including the test farm tests running at https://travis-ci.com/github/certbot/certbot/builds/160269969 with this change.
The issue of adding 19.10 to our test farm tests is tracked by #7851. I think that issue is important and it's in our current milestone, but I'd personally rather get our tests passing for now and try to expand them to run on other systems later.
* Revert "Do not require mock in Python 3 in certbot module (#7895)"
This reverts commit 77871ba71c.
* Revert "Do not require mock in Python 3 in acme module (#7894)"
This reverts commit cd0acf5dcc.
Part of #7886.
This PR conditionally installs mock in `certbot/setup.py` based on setuptools version and python version, when possible. It then updates `certbot` tests to use `unittest.mock` when `mock` isn't available.
* Conditionally install mock in certbot
* use unittest.mock when third-party mock isn't available in certbot
* Add type:ignores because of https://github.com/python/mypy/issues/1153
* error when trying to build wheels with old setuptools
Part of #7886.
This PR conditionally installs mock in acme/setup.py based on setuptools version and python version, when possible. It then updates acme tests to use unittest.mock when mock isn't available.
* Conditionally install mock in acme
* use unittest.mock when third-party mock isn't available in acme
* error when trying to build wheels with old setuptools
* Fix dangerous default argument
* Remove unused imports
* Remove unnecessary comprehension
* Use literal syntax to create data structure
* Use literal syntax instead of function calls to create data structure
Co-authored-by: deepsource-autofix[bot] <62050782+deepsource-autofix[bot]@users.noreply.github.com>
Fixes #7857.
* stop using urllib2 in test farm tests
* use six for urllib instead
* remove fabric lcd usage
* correct lcd removal
* remove fabric cd
* convert some remote calls to v2
* move more cxns to v2
* get run working with prefix
* get sudo commands working
* remove final fabric v1 references including local
* update requirements and README
* add new venv to gitignore
* update version used in travis
* remove deploy_script unused kwargs
* fix killboulder implementation so I can test creating a new boulder server
* hardcode the gopath due to broken env management in fabric2
* Update letstest readme
* move the comment about hardcoding the gopath
* catch BaseException instead of Exception
* work around fabric #2007
* use connections as context managers to ensure they're closed
* remove reference to virtualenv
Translate a proxy specified by an environment variable ("http_proxy"
or "HTTP_PROXY") into options recognized by "openssl ocsp". Support
is limited to HTTP proxies which don't require authentication.
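A sketch of the translation, assuming a helper like the following rather than the actual Certbot code; `-url`, `-host`, and `-path` are standard `openssl ocsp` options:
```
import os
from urllib.parse import urlparse  # assumes Python 3

def ocsp_url_args(ocsp_url):
    """Return "openssl ocsp" arguments, routing through an HTTP proxy if set."""
    proxy = os.environ.get('http_proxy') or os.environ.get('HTTP_PROXY')
    if not proxy:
        return ['-url', ocsp_url]
    parsed = urlparse(proxy)
    # Talk to the proxy itself and keep the full responder URL in -path;
    # proxies requiring authentication are not supported.
    return ['-host', '%s:%s' % (parsed.hostname, parsed.port or 80),
            '-path', ocsp_url]
```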
Fixes #6150
Fixes #7875.
After [this comment](https://github.com/certbot/certbot/issues/7875#issuecomment-608145208) and evaluating the options, I opted to go with `stricttextualmsg`, as required by RFC 8555. Reasoning is that the ACME v1 code path (via OpenSSL) produces a `fullchain_pem` which satisfies `stricttextualmsg`, so we don't need to be more generous than that.
One downside of the `re` approach is that it doesn't seem capable of capturing repeating group matches. As a result, it matches each certificate individually, silently passing over any data in between the encapsulation boundaries, such as explanatory text, which is prohibited by RFC 8555.
It would be ideal to raise an error when encountering such a non-conformant chain, but we'd need to create a mini-parser to do it, I think.
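A rough sketch of the two-pass idea described in the commit below; the regex and the normalization step are simplified, and the names are illustrative:
```
import re

# Pass 1: be generous about what sits between the CERTIFICATE encapsulation
# boundaries, and tolerate CRLF line endings.
CERT_RE = re.compile(
    b'-----BEGIN CERTIFICATE-----\r?\n.+?\r?\n-----END CERTIFICATE-----\r?\n?',
    re.DOTALL,
)

def split_fullchain(fullchain_pem):
    """Return the individual certificate blocks found in a fullchain PEM."""
    certs = CERT_RE.findall(fullchain_pem)
    # Pass 2 (not shown): parse and re-encode each block with a real X.509
    # parser so the output is normalized regardless of line endings.
    return certs
```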
* Fix fullchain parsing for CRLF chains.
fullchain parsing now works in two passes:
1. A first pass which is generous with what it accepts - basically
preeb(CERTIFICATE)+anything+posteb(CERTIFICATE). This determines
the boundaries for each certificate.
2. A second pass which normalizes (by parsing and re-encoding) each
certificate found in the first pass.
* typo in docstring
* remove redundant group in regex
* can't use assertRaisesRegex until py27 is gone
* acme: socket timeout for HTTP standalone servers
Adds a default 30 second timeout to the StreamRequestHandler for clients
connecting to standalone HTTP-01 servers. This should prevent most cases
of an idle client connection from preventing the standalone server from
shutting down. (A rough sketch of this timeout approach follows the list below.)
Fixes #7386
* use idiomatic kwargs default value
* move HTTP01Server lower to fix mypy forward ref.
* fix test crash on macOS due to socket double-close
* maybe its not an OSError?
* disable coverage check on useless branch
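A minimal sketch of the timeout mechanism referenced above, using only the standard library; the real acme standalone server is more involved:
```
import http.server
import socketserver

class TimeoutHTTPRequestHandler(http.server.BaseHTTPRequestHandler):
    # StreamRequestHandler applies this to the client socket during setup(),
    # so a client that connects and then goes idle times out after 30 seconds
    # instead of keeping the server from shutting down.
    timeout = 30

    def do_GET(self):
        self.send_response(200)
        self.end_headers()
        self.wfile.write(b'ok')

if __name__ == '__main__':
    with socketserver.TCPServer(('127.0.0.1', 0), TimeoutHTTPRequestHandler) as server:
        print('serving on port', server.server_address[1])
        server.handle_request()  # handle a single request for demonstration
```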
Fixes #7594.
Removes the code asking interactively if the user would like to add a redirect.
* Remove interactive redirect ask
* display.enhancements is no longer used, so remove it.
* update changelog
* remove references to removed display.enhancements
* add redirect_default flag to enhance_config to conditionally set default for redirect value
* Update default in help text.
* Load apacheconfig dependency, gate behind flag
* Bump apacheconfig dependency to latest version and install dev version of apache for coverage tests
* Move augeasnode_test tests to more generic parsernode_test
* Revert "Move augeasnode_test tests to more generic parsernode_test"
This reverts commit 6bb986ef786b9d68bb72776bde66e6572cf505a9.
* Mock AugeasNode into DualNode's place, and run augeasnode tests exclusively on AugeasNode
* Don't calculate coverage for skeleton functions
* clean up helper function in augeasnode_test
Fixes #7350.
This PR changes the parsed modules from a `set` to a `dict`, with the filepath argument as the value. Accordingly, after calling `enable_mod` to enable `ssl_module`, modules now need to be re-parsed, so call `reset_modules`.
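A tiny illustration of the data-structure change; the names are illustrative, not the actual certbot-apache attributes:
```
# Before: parsed_modules = {"ssl_module", "mpm_event_module"}
# After: a dict that also remembers where each module was loaded from.
parsed_modules = {
    'ssl_module': '/usr/lib/apache2/modules/mod_ssl.so',
    'mpm_event_module': None,  # value may be None when no path is known
}

# Membership checks only look at the keys, as before:
assert 'ssl_module' in parsed_modules
# ...and the file path is now available when needed:
ssl_path = parsed_modules.get('ssl_module')
```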
* Add mechanism for selecting apache config file, based on work done in #7191.
* Check OpenSSL version
* Remove os imports
* debian override still needs os
* Reformat remaining apache tests with modules dict syntax
* Clean up more apache tests
* Switch from property to method for openssl and add tests for coverage.
* Sometimes the dict location will be None in which case we should in fact return None
* warn thoroughly and consistently in openssl_version function
* update tests for new warnings
* read file as bytes, and factor out the open for testing
* normalize ssl_module_location path to account for being relative to server root
* Use byte literals in a python 2 and 3 compatible way
* string does need to be a literal
* patch builtins open
* add debug, remove space
* Add test to check if OpenSSL detection is working on different systems
* fix relative test location for cwd
* put </IfModule> on its own line in test case
* Revert test file to status in master.
* Call augeas load before reparsing modules to pick up the changes
* fix grep, tail, and mod_ssl location on centos
* strip the trailing whitespace from fedora
* just use LooseVersion in test
* call apache2ctl on debian systems
* Use sudo for apache2ctl command
* add check to make sure we're getting a version
* Add boolean so we don't warn on debian/ubuntu before trying to enable mod_ssl
* Reduce warnings while testing by setting mock _openssl_version.
* Make sure we're not throwing away any unwritten changes to the config
* test last warning case for coverage
* text changes for clarity
This PR builds on #7657 and cleans up additional unnecessary pylint comments and some stray comments referring to pylint: disable comments that have been deleted that I didn't notice in my review of that PR.
* Remove stray pylint link.
* Cleanup more pylint comments
* Cleanup magic_typing imports
* Remove unneeded pylint: enable comments
This PR is an alternative to #7125.
Instead of disabling the strict mode on Pebble, this PR fixes the JWS payloads to be compliant with RFC 8555, and allows certbot to work with Pebble v2.1.0+.
* Fix acme compliance to RFC 8555.
* Working mixin
* Activate back pebble strict mode
* Use mixin for type
* Update dependencies
* Fix also in fields_to_partial_json
* Update pebble
* Add changelog
This PR is the first part of work described in #6724.
It reintroduces in the `acme` module the tls-alpn-01 challenge that was introduced by #5894 and reverted by #6100. The reason it was removed in the past is that some tests showed that with the `1.0.2` branch of OpenSSL, the self-signed certificate containing the authorization key is sent to the requester even if the ALPN protocol `acme-tls/1` was not declared as supported by the requester during the TLS handshake.
However, recent discussions led to the conclusion that this behavior was not a security issue, because first it is consistent with the behavior of servers that do not support ALPN at all, and second it cannot make a tls-alpn-01 challenge be validated in this kind of corner case.
On top of the original modifications given by #5894, I merged the code to be up to date with our `master`, and fixed tests to match the recent change of not displaying the `keyAuthorization` in the deserialized JSON form of an ACME challenge.
I also moved the logic that verifies whether ALPN is available on the current system, and so whether the tls-alpn-01 challenge can be used, into a dedicated static function `is_available` on `acme.challenges.TLSALPN01`. This function is used in the related tests to skip them, and will be used in the future from Certbot plugins to decide whether to trigger the logic related to tls-alpn-01, depending on the OpenSSL version available to Python.
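As a rough illustration of the kind of check involved (this is not the actual `is_available` implementation, which inspects the OpenSSL bindings available to the `acme` module):
```
import ssl

def tls_alpn_01_supported():
    """Return True if the TLS stack visible to Python advertises ALPN support."""
    # ALPN requires OpenSSL 1.0.2+; the standard library exposes this as
    # ssl.HAS_ALPN (absent on very old Python versions, hence getattr).
    return bool(getattr(ssl, 'HAS_ALPN', False))

if not tls_alpn_01_supported():
    print('tls-alpn-01 challenges cannot be used on this system')
```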
* Reimplement TLS-ALPN-01 challenge and standalone TLS-ALPN server from #5894.
* Setup a class method to check if tls-alpn-01 is supported.
* Add potential missing parameter in validation for tls-alpn
* Improve comments
* Make a class private
* Handle old versions of openssl that do not terminate the handshake when they should do.
* Add changelog
* Explicitly close the TLS connection by the book.
* Remove unused exception
* Fix lint
Fixes #7835
I had to mock out `get_serial_from_cert` to keep a test from failing, because `cert_path` was mocked itself in `test_report_human_readable`.
Also, I kept the same style for the serial number as the recent Let's Encrypt e-mail: lowercase hexadecimal without a `0x` prefix and without colons every 2 chars. Shouldn't be a problem to change the format if required.
Fixes #5484
This PR makes Certbot expose two new environment variables in the auth and cleanup hooks of the `manual` plugin:
* `CERTBOT_REMAINING_CHALLENGES` contains the number of challenges that remain after the current one (so it equals 0 when the script is called for the last challenge)
* `CERTBOT_ALL_DOMAINS` contains a comma-separated list of all domains concerned by a challenge for the current certificate
With these variables, a hook script can know when it is run for the last time, and then trigger appropriate finalizers for all the challenges that have been executed. This will be particularly useful for certificates with a lot of domains validated with DNS-01 challenges: instead of waiting on each hook execution to check that the relevant DNS TXT entry has been inserted, these waits can be avoided by having the last hook invocation verify all domains in one run.
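For illustration, an auth hook could use the new variables roughly like this (a sketch; `CERTBOT_DOMAIN` is one of the pre-existing manual-hook variables):
```
#!/usr/bin/env python3
"""Sketch of a manual auth hook that defers DNS propagation checks."""
import os

domain = os.environ.get('CERTBOT_DOMAIN', '')
remaining = int(os.environ.get('CERTBOT_REMAINING_CHALLENGES', '0'))
all_domains = os.environ.get('CERTBOT_ALL_DOMAINS', '').split(',')

# ... insert the TXT record for `domain` here, without waiting ...

if remaining == 0:
    # Last invocation: only now wait for every TXT record to become visible.
    print('waiting for DNS propagation of: %s' % ', '.join(all_domains))
```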
* Inject environment variables in manual scripts about remaining challenges
* Adapt tests
* Less variables and less lines
* Update manual.py
* Update manual_test.py
* Add documentation
* Add changelog
Fixes #1028.
Doing this now because of https://community.letsencrypt.org/t/revoking-certain-certificates-on-march-4/.
The new `ocsp_revoked_by_paths` function is taken from https://github.com/certbot/certbot/pull/7649 with the optional argument removed for now because it is unused.
This function was added in this PR because `storage.py` uses `self.latest_common_version()` to determine which certificate should be looked at for determining renewal status at 9f8e4507ad/certbot/certbot/_internal/storage.py (L939-L947)
I think this is unnecessary and you can just look at the currently linked certificate, but I don't think we should be changing the logic that code has always had now.
* Check OCSP status as part of determining to renew
* add integration tests
* add ocsp_revoked_by_paths
Certificates are public information by design: they are provided by
web servers without any prior authentication required. In a public
key cryptographic system, only the private key is secret information.
The private key file is already created as accessible only to the root
user with mode 0600, and these file permissions are set before any key
content is written to the file. There is no window within which an
attacker with access to the containing directory would be able to read
the private key content.
Older versions of Certbot (prior to 0.29.0) would create private key
files with mode 0644 and rely solely on the containing directory
permissions to restrict access. We therefore cannot (yet) set the
relevant default directory permissions to 0755, since it is possible
that a user could install Certbot, obtain a certificate, then
downgrade to a pre-0.29.0 version of Certbot, then obtain another
certificate. This chain of events would leave the second
certificate's private key file exposed.
As a compromise solution, document the fact that it is safe for the
common case of non-downgrading users to change the permissions of
/etc/letsencrypt/{live,archive} to 0755, and explain how to use chgrp
and chmod to make the private key file readable by a non-root service
user.
This provides guidance on the simplest way to solve the common problem
of making keys and certificates usable by services that run without
root privileges, with no requirement to create a custom (and hence
error-prone) executable hook.
Remove the existing custom executable hook example, so that the
documentation contains only the simplest and safest way to solve this
very common problem.
Signed-off-by: Michael Brown <mbrown@fensystems.co.uk>
After getting a +1 from everyone on the team, this PR removes the use of `codecov` from the Certbot repo because we keep having problems with it.
Two noteworthy things about this PR are:
1. I left the text at 4ea98d830b/.azure-pipelines/INSTALL.md (add-a-secret-variable-to-a-pipeline-like-codecov_token) because I think it's useful to document how to set up a secret variable in general.
2. I'm not sure what the text "Option -e makes sure we fail fast and don't submit to codecov." in `tox.cover.py` refers to but it seems incorrect since `-e` isn't accepted or used by the script so I just deleted the line.
As part of this, I said I'd open an issue to track setting up coveralls (which seems to be the only real alternative to codecov) which is at https://github.com/certbot/certbot/issues/7810.
With my change, failure output looks something like:
```
$ tox -e py27-cover
...
Name Stmts Miss Cover Missing
------------------------------------------------------------------------------------------
certbot/certbot/__init__.py 1 0 100%
certbot/certbot/_internal/__init__.py 0 0 100%
certbot/certbot/_internal/account.py 191 4 98% 62-63, 206, 337
...
certbot/tests/storage_test.py 530 0 100%
certbot/tests/util_test.py 374 29 92% 211-213, 480-484, 489-499, 504-511, 545-547, 552-554
------------------------------------------------------------------------------------------
TOTAL 14451 647 96%
Command '['/path/to/certbot/dir/.tox/py27-cover/bin/python', '-m', 'coverage', 'report', '--fail-under', '100', '--include', 'certbot/*', '--show-missing']' returned non-zero exit status 2
Test coverage on certbot did not meet threshold of 100%.
ERROR: InvocationError for command /Users/bmw/Development/certbot/certbot/.tox/py27-cover/bin/python tox.cover.py (exited with code 1)
_________________________________________________________________________________________________________________________________________________________ summary _________________________________________________________________________________________________________________________________________________________
ERROR: py27-cover: commands failed
```
I printed the exception just so we're not throwing away information.
I think it's also possible we could fail for a reason other than the coverage not meeting the threshold, but I've personally never seen this. The `coverage report` output is not being captured, so hopefully that would inform devs if something else is going on, and saying something like "Test coverage probably did not..." seems like overkill to me personally.
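The failure path in `tox.cover.py` now looks roughly like the sketch below (simplified; the real script iterates over packages and their thresholds):
```
import subprocess
import sys

def report_coverage(package, threshold):
    """Run "coverage report" and fail with a readable message below threshold."""
    cmd = [sys.executable, '-m', 'coverage', 'report',
           '--fail-under', str(threshold),
           '--include', package + '/*', '--show-missing']
    try:
        subprocess.check_call(cmd)
    except subprocess.CalledProcessError as err:
        # Keep the exception so no information is thrown away, then summarize.
        print(err)
        sys.exit('Test coverage on %s did not meet threshold of %d%%.'
                 % (package, threshold))
```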
* remove codecov
* remove unused variable group
* remove codecov.yml
* Improve tox.cover.py failure output.
Related to https://github.com/certbot/certbot/pull/7482, this removes some references to deprecated options in Certbot.
The only references I didn't remove were:
* In `certbot/tests/testdata/sample-renewal*` which contains a lot of old values and I think there's even some value in keeping them so we know if we make a change that suddenly causes old renewal configuration files to error.
* In the Apache and Nginx plugins and I created https://github.com/certbot/certbot/issues/7508 to resolve that issue.
If you run `mypy --platform darwin certbot/certbot/util.py` you'll get:
```
certbot/certbot/util.py:303: error: Name 'distro' is not defined
certbot/certbot/util.py:319: error: Name 'distro' is not defined
certbot/certbot/util.py:369: error: Name 'distro' is not defined
```
This is because mypy's logic for handling platform specific code is pretty simple and can't figure out what we're doing with `_USE_DISTRO` here. See https://mypy.readthedocs.io/en/stable/common_issues.html#python-version-and-system-platform-checks for more info.
Setting `_USE_DISTRO` to the result of `sys.platform.startswith('linux')` solves the problem without changing the overall behavior of our code here though.
This fixes part of https://github.com/certbot/certbot/issues/7803, but there's more work to be done on Windows.
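The resulting pattern looks roughly like this (sketch; `distro` is the Linux-only dependency in question):
```
import sys

# mypy's platform handling understands this form of check, so with
# `--platform darwin` it simply skips the Linux-only import below.
_USE_DISTRO = sys.platform.startswith('linux')

if _USE_DISTRO:
    import distro  # Linux-only dependency used for OS detection
```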
I don't fully understand why, but since I updated my macbook to macOS Catalina, the test script currently fails to run for me with the versions of our dependencies we have pinned. Updating the dependencies solves the problem though and you can see Travis also successfully running tests with these new dependencies at https://travis-ci.com/certbot/certbot/builds/150573696.
I want to do what I did in https://github.com/certbot/certbot/pull/7733 to our Azure Pipelines setup, but unfortunately this isn't currently possible. The only filters available for service hooks for the "build completed" trigger are the pipeline and build status.
To accomplish this, I propose splitting the "advanced" pipeline into two cases. One is for builds on protected branches where we want to be notified if they fail, while the other is just used to manually run tests on certain branches.
This PR adds appropriate corrections to certbot dockerfile to work with the new layout (moving certbot python project in its own subdirectory).
This PR has been tested with success using the `build.sh` script on a fake v1.0 version of certbot published on my fork (https://github.com/adferrand/certbot/releases/tag/v1.0) instead of archives from `certbot/certbot`.
* Add one Dockerfile for each supported architecture
* Update multi arch hooks
* Create multi arch scripts
* Update README.md
* WIP. Use build args instead of multiple Dockerfiles in build script
* WIP. Fix typo mistake
* Use build args instead of multiple Dockerfiles in build script
* WIP. Build all the architectures in one DockerHub build
* Add arm64v8 architecture
* WIP. Testing build all the architectures in one DockerHub build
* Revert "WIP. Testing build all the architectures in one DockerHub build"
This reverts commit 94a89398a4120b183d2851ac7cb9c93db0e3d187.
* Refactor tag docker images in hooks/post_push files
* Use variables instead of positional arguments
* Export externally used variables
* Use ${variable//search/replace} instead of echo $variable | sed.
* Update README.md
* Add Cleanup in build.sh script
* Fix tagging error in post_push hook
* Add "-ex" flags to bash script
* Test tagging images in build hook
* Tagging in hook/post_build instead of hook/post_push
* Push built architecture dependent image
* Use Dockerfile argument instead of fixed value
* Fix typo
* Use parameter instead of global variable
* Use custom "hook/push" to prevent duplicated push
* Make a short doctype for each function declared in common