Fixes #9990
If the Python one-liner used to check Certbot's version succeeded, exit_code
would never be set, which would cause our exit_code check to fail. Use
a check that handles an unset exit_code.
* Print an error if outdated snap plugins detected
With Certbot 3.0 comes a bump to Python 3.12, so if any snap plugins
are still located in a python3.8 directory, print an error informing
the user.
* tox nitpicks
* personal nitpick
* review fixups
* Update certbot/certbot/_internal/snap_config.py
Co-authored-by: ohemorange <ebportnoy@gmail.com>
* Use LOGGER.warn instead of error
* warn-->warning
* warn-->warning
---------
Co-authored-by: ohemorange <ebportnoy@gmail.com>
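A minimal sketch of the outdated-plugin check described above, assuming a hypothetical path layout for connected plugin snaps; the real logic in snap_config.py may differ:
```
import glob
import logging

LOGGER = logging.getLogger(__name__)

def warn_about_outdated_plugin_snaps(plugin_snap_names):
    """Warn if any connected plugin snap still ships a python3.8 tree.

    With the core24 rebase, Certbot itself runs on Python 3.12, so plugins
    built against python3.8 can no longer be imported.
    """
    for name in plugin_snap_names:
        # Hypothetical path layout; adjust to the actual snap content interface.
        outdated = glob.glob(f"/snap/{name}/current/lib/python3.8/site-packages")
        if outdated:
            LOGGER.warning(
                "Plugin snap %s was built for Python 3.8 and must be rebuilt "
                "against a Certbot snap based on Python 3.12.", name)
```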
macOS-12 is [being deprecated](https://github.com/actions/runner-images/issues/10721) on Azure, so update to the latest available version.
* Upgrade macOS azure tests to use macOS-15
* switch standard azure tests to using python 3.12
* restore mac and linux cover tests to oldest and newest version style, and add explanation that that's what we're doing.
We're a few years behind the curve on this one, but using "master" as a
programming term is a callous practice that explicitly uses the
historical institution of slavery as a cheap, racist metaphor. Switch to
using "main", as it's the new default in git and GitHub.
This is another and very minor piece of https://github.com/certbot/certbot/issues/9988.
We've done nothing to warn/migrate installations using the old `certbot-route53:auth` plugin name and installations like that still exist according to https://gist.github.com/bmw/aceb69020dceee50ba827ec17b22e08a. We could try to warn/migrate these users for a future release or decide it's niche enough that we'll just let it break, but I think it's easy enough to keep the simple shim around.
This PR just moves the code raising a deprecation warning into `_internal` as part of cleaning up all deprecation warnings I found in https://github.com/certbot/certbot/issues/9988. I manually tested this with a Certbot config using the `certbot-route53:auth` plugin name and renewal worked just fine.
The `gnupg` package from Homebrew only installs a `gpg` binary, not a `gpg2` binary. I had previously worked around this by manually creating an alias, but I think we can do better.
GPG version 1 is ancient and [hasn't seen a release since 2006](https://gnupg.org/download/release_notes.html). Additionally, `gpg` has referred to GPG 2 in Ubuntu since at least 20.04, which is the oldest non-EOL'd version as of writing this, so I think this change is safe to make.
Fixes #9872, originally merged in #9956.
To upgrade to Python 3.12, as 3.8 is reaching EOL, we need to upgrade the core snap that Certbot is based on. The latest version is core24, so we're going with that for longevity. We will want to notify third-party snaps to make changes as well. They can release their snaps at a version higher than Certbot's, and their users will not be upgraded until the matching (or greater) version of Certbot is released. They should do this, as otherwise including these changes will break their plugins.
Key documents for this migration are https://snapcraft.io/docs/migrate-core22 and https://snapcraft.io/docs/migrate-core24. The discussion at https://forum.snapcraft.io/t/upgrading-classic-snap-to-core24-using-snapcraft-8-3-causes-python-3-12-errors-at-runtime/ is also relevant to understanding some changes, which may become unnecessary in future versions of snapcraft.
* Migrate primary certbot snap to core24 and python 3.12
* Migrate plugin snaps to core24 and python 3.12
* Migrate to core24 in build_remote
* Run snap tests using python 3.12
* Unstage pyvenv.cfg and set PYTHONPATH
---------
Co-authored-by: Erica Portnoy <ebportnoy@gmail.com>
Co-authored-by: Erica Portnoy <erica@eff.org>
Recently our test environments were upgraded to use Docker 26, which
enabled ipv6 loopback by default in containers. This caused tests to
start failing due to an nginx test config which was the sole listener
for ipv6.
This simply removes that ipv6 listen directive in the config, and the
archived version we use for testing.
While working on https://github.com/certbot/certbot/issues/9938, I updated our dependencies, which updated mypy and introduced new errors that mypy wanted me to fix. I think this makes the regularly necessary process of updating our dependencies too tedious; we should instead pin the linters that do this to a specific version and update them manually as desired. We already do this with pylint in the lines above my changes in this PR for the same reason.
This fixes a bug where, when a user requests a cert interactively, the
CSR's SANs are not listed in the order that the user has in mind. During
input validation, the _scrub_checklist_input method does not produce the
list of tags (which represent the domain names the user has requested a
cert for) in the order of the given indices. As a result, the CN of the
resulting cert, as well as the directory name used to store the certs,
may not be what the user expected, which should be the first item chosen
from the interactive prompt.
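A minimal sketch of the ordering fix, using a hypothetical helper rather than the real `_scrub_checklist_input`: the selected tags should come back in the order of the indices the user entered, not in the order the tags happen to appear in the checklist.
```
from typing import List

def tags_in_selection_order(tags: List[str], indices: List[int]) -> List[str]:
    """Return the chosen tags in the order the user selected them.

    `indices` are 1-based positions typed at the interactive checklist
    prompt, e.g. "2 1" should yield the second tag first.
    """
    return [tags[i - 1] for i in indices]

# The first selected name then becomes the cert's CN and lineage name:
assert tags_in_selection_order(["a.example.org", "b.example.org"], [2, 1]) == \
    ["b.example.org", "a.example.org"]
```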
Pebble 2.5.1 supports OCSP stapling, so we can finally replace all boulder tests/harnesses with the much simpler pebble setup.
Closes #9898
* Remove unused `--acme-server` argument
Since this argument is never set and always defaults to 'pebble', just
remove it to simplify assumptions about which test server's being used.
* Remove boulder option from integration tests
Now that pebble supports all of our test cases, we can move off of
the much more complicated boulder test harness.
* pebble_artifacts: bump to latest pebble release
* pebble_artifacts: fix download path
* certbot-ci: unzip pebble assets
* CI: rip out windows tests/jobs
* tox.ini: rm outdated Windows comment
Co-authored-by: Brad Warren <bmw@users.noreply.github.com>
* ci: rm redundant integration test
Co-authored-by: Brad Warren <bmw@users.noreply.github.com>
* acme_server: raise error if proxy and http-01 port are both set
* acme_server: rm vestigial preterimate commands stuff
---------
Co-authored-by: Brad Warren <bmw@users.noreply.github.com>
Azure recently dropped the `docker-compose` standalone executable (aka
docker-compose v1), and since it's not receiving updates anymore, let's
get with the times and update to v2 as well.
* Reset requirements.txt path
* Add requirements.txt path
* Test config path
* Change docs path
* Amend paths for successful builds
* Place copyright for epub
- Will amend copyright parameter at a later date
* Make reconfigure use staging server
* lint and imports
* Unset the account if it's been set in preparation for a dry run
* Add unit tests for checking we switch to staging and don't accidentally modify anything else
* add docstring
* Add test to make sure a requested new account id is saved
* update changelog
* set noninteractive mode for dry run
* error when account or server is set by the user
* switch to checking for changed values in account and server
* recommend using renew instead of certonly for forbidden fields
* change link to renew-reconfiguration
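A rough sketch of the dry-run safeguards described in the bullets above; the function, attribute, and message names are hypothetical stand-ins for Certbot's internals.
```
STAGING_DIRECTORY = "https://acme-staging-v02.api.letsencrypt.org/directory"

def prepare_dry_run(config, set_by_user):
    """Point a reconfigure dry run at the staging server.

    `set_by_user(name)` is a hypothetical callable reporting whether the
    user explicitly set an option (cf. NamespaceConfig.set_by_user).
    """
    for forbidden in ("server", "account"):
        if set_by_user(forbidden):
            raise ValueError(
                f"--{forbidden} cannot be changed during reconfigure; "
                "use 'certbot renew' for these options instead.")
    config.server = STAGING_DIRECTORY
    # Unset any previously configured account so a staging account is used.
    config.account = None
    config.noninteractive_mode = True
```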
* Set the delegated field in Lexicon config to bypass subdomain resolution (#9821)
The Lexicon-based DNS plugins use a mechanism to determine which segment of the input domain is actually the DNS zone in which the DNS-01 challenge has to be initiated (e.g. `subdomain.domain.com` or `domain.com` for input `subdomain.domain.com`): they recursively try to configure Lexicon and initiate authentication from the most specific to the most generic domain segment, and select the first segment where Lexicon stops erroring out.
This mechanism broke with #9746 because the plugins now call the Lexicon client instead of the underlying providers, and the client makes its own guess about the actual domain requested. Typically for `subdomain.domain.com` it will try to authenticate against `domain.com`, so the mechanism above no longer works.
This PR fixes the issue by setting the `delegated` field in the Lexicon config each time the plugin needs it. This field is designed for exactly this purpose: it tells Lexicon which domain is the actual DNS zone instead of letting it guess.
I tested the change with one of my OVH accounts. The expected behavior is re-established and the plugin is able to test `subdomain.domain.com` then `domain.com` as before.
Fixes #9791, fixes #9818
(cherry picked from commit cf4f07d17e)
* add changelog entry for 9821 (#9822)
(cherry picked from commit 7bb85f8440)
---------
Co-authored-by: Adrien Ferrand <adferrand@users.noreply.github.com>
The Lexicon-based DNS plugins use a mechanism to determine which segment of the input domain is actually the DNS zone in which the DNS-01 challenge has to be initiated (e.g. `subdomain.domain.com` or `domain.com` for input `subdomain.domain.com`): they recursively try to configure Lexicon and initiate authentication from the most specific to the most generic domain segment, and select the first segment where Lexicon stops erroring out.
This mechanism broke with #9746 because the plugins now call the Lexicon client instead of the underlying providers, and the client makes its own guess about the actual domain requested. Typically for `subdomain.domain.com` it will try to authenticate against `domain.com`, so the mechanism above no longer works.
This PR fixes the issue by setting the `delegated` field in the Lexicon config each time the plugin needs it. This field is designed for exactly this purpose: it tells Lexicon which domain is the actual DNS zone instead of letting it guess.
I tested the change with one of my OVH accounts. The expected behavior is re-established and the plugin is able to test `subdomain.domain.com` then `domain.com` as before.
Fixes #9791, fixes #9818
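A rough sketch of the idea, with the Lexicon-specific plumbing elided and hypothetical helper names: walk from the most specific candidate zone to the least specific one, and pass each candidate as the `delegated` domain so Lexicon does not guess the zone itself.
```
def candidate_zones(domain: str):
    """Yield 'subdomain.domain.com' then 'domain.com' for 'subdomain.domain.com'."""
    labels = domain.split(".")
    for i in range(len(labels) - 1):
        yield ".".join(labels[i:])

def find_zone(domain: str, authenticate) -> str:
    """Return the first candidate segment the DNS provider accepts as a zone.

    `authenticate(config)` stands in for building a Lexicon config and calling
    the client; the real plugins do this through certbot's Lexicon helpers.
    """
    last_error = None
    for zone in candidate_zones(domain):
        config = {
            "domain": zone,
            # The fix: pass the candidate as the delegated zone so the Lexicon
            # client acts on it directly instead of guessing the registered domain.
            "delegated": zone,
        }
        try:
            authenticate(config)
            return zone
        except Exception as err:  # segments that are not real zones error out
            last_error = err
    raise last_error or ValueError(f"no DNS zone found for {domain}")
```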
* helpful: fix handling of abbreviated ConfigArgparse arguments (#9796)
* helpful: fix handling of abbreviated ConfigArgparse arguments
ConfigArgparse allows for "abbreviated" arguments, i.e. just the prefix
of an argument, but it doesn't set the argument sources in these cases.
This commit checks for those cases and sets the sources appropriately.
* failing to find an action raises an error instead of logging
* Update changelog
* Add handling for short arguments, fix equals sign handling
These were silently being dropped before, possibly leading to instances
of `NamespaceConfig.set_by_user()` returning false negatives.
(cherry picked from commit 11e17ef77b)
* Fix finish_release.py (#9800)
* response is value
* rename vars
(cherry picked from commit a96fb4b6ce)
* Merge pull request #9762 from certbot/docs/yaml-config
Add YAML files for Readthedocs requirements
(cherry picked from commit 44046c70c3)
* Update Lexicon requirements to stabilize certbot-dns-ovh behavior (#9802)
* Update minimum Lexicon version required for certbot-dns-ovh
* Add types
* Fix mypy
* Fix lint
* Fix BOTH lint and mypy
(cherry picked from commit 5cf5f36f19)
* simplify code (#9807)
(cherry picked from commit 6f7b5ab1cd)
* Include linting fixes from 8a95c03
---------
Co-authored-by: Will Greenberg <willg@eff.org>
Co-authored-by: Alexis <alexis@eff.org>
Co-authored-by: Adrien Ferrand <adferrand@users.noreply.github.com>
* helpful: fix handling of abbreviated ConfigArgparse arguments
ConfigArgparse allows for "abbreviated" arguments, i.e. just the prefix
of an argument, but it doesn't set the argument sources in these cases.
This commit checks for those cases and sets the sources appropriately.
* failing to find an action raises an error instead of logging
* Update changelog
* Add handling for short arguments, fix equals sign handling
These were silently being dropped before, possibly leading to instances
of `NamespaceConfig.set_by_user()` returning false negatives.
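A small sketch of the prefix-and-equals-sign normalization described above, using plain Python and hypothetical names; the real fix records sources from ConfigArgParse's `get_source_to_settings_dict()`.
```
from typing import Iterable, Optional

def expand_abbreviation(token: str, option_strings: Iterable[str]) -> Optional[str]:
    """Map an abbreviated or '='-joined CLI token to its full option string.

    argparse accepts '--agree' for '--agree-tos' and '--rsa-key-size=2048'
    for '--rsa-key-size'; source bookkeeping has to normalize both forms
    before recording where an argument came from.
    """
    name = token.split("=", 1)[0]
    if name in option_strings:
        return name
    matches = [opt for opt in option_strings if opt.startswith(name)]
    # Only an unambiguous prefix identifies an option.
    return matches[0] if len(matches) == 1 else None

assert expand_abbreviation("--agree", ["--agree-tos", "--apache"]) == "--agree-tos"
assert expand_abbreviation("--rsa-key-size=2048", ["--rsa-key-size"]) == "--rsa-key-size"
```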
* Drop Python 3.7 support
* Fix lint and test
* Check for venv generation
* Update requirements
* Update oldest constraints and compatibility tests runtime
* Refactor Lexicon-based DNS plugins and upgrade minimal version of Lexicon
* Relax filterwarning to comply with envs where boto3 is not installed
* Update pinned dependencies
* Use our previous method to deprecate part of modules
* Safe import internally
* Add changelog
Co-authored-by: Brad Warren <bmw@users.noreply.github.com>
* Migrate entrypoint logic from pkg_resources to importlib.metadata
* Usage of importlib_metadata up to Python 3.9 to align API behavior to Python 3.10
---------
Co-authored-by: Adrien Ferrand <adrien.ferrand@amadeus.com>
Co-authored-by: Adrien Ferrand <adrien.ferrand@arteris.com>
* Migrate pkg_resources API related to resources to importlib_resources
* Fix lint and mypy + pin lexicon
* Update filterwarnings
* Update oldest tests requirements
* Update pinned dependencies
* Fix for modern versions of python
* Fix assets load in nginx integration tests
* Fix a warning
* Isolate static generation from importlib.resources into a private function
---------
Co-authored-by: Adrien Ferrand <adrien.ferrand@amadeus.com>
* dns-google: fix condition to not use private DNS zones
* update MD
* Fix condition
* fix condition
* update testdata
* fix indentation
* update tests
* update changelog
* Update dns_google.py
* add test for split horizon dns google
* add dnsName to managed zones
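A rough sketch of the zone-selection condition, assuming managed-zone dicts shaped like the Cloud DNS API response (`dnsName`, `visibility`); the plugin's actual code differs.
```
def zone_covers(zone, fqdn):
    dns_name = zone["dnsName"]  # e.g. "example.org."
    return fqdn == dns_name or fqdn.endswith("." + dns_name)

def eligible_zones(managed_zones, domain):
    """Pick public zones whose dnsName covers the target domain.

    Private (split-horizon) zones are skipped so the TXT record lands in
    a zone the ACME server can actually resolve.
    """
    fqdn = domain.rstrip(".") + "."
    matches = [
        zone for zone in managed_zones
        if zone.get("visibility", "public") == "public" and zone_covers(zone, fqdn)
    ]
    # Prefer the most specific (longest) matching zone.
    return sorted(matches, key=lambda z: len(z["dnsName"]), reverse=True)
```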
* update quickstart and remove os import
* simplify theme use
* list sphinx_rtd_theme as extension
Our docs builds failed last night, presumably because #9754 updated `sphinx_rtd_theme` which changed some unknown thing.
Looking into it, our usage of this project was very unconventional. Following the code comment I deleted in this PR to https://docs.readthedocs.io/en/stable/faq.html#i-want-to-use-the-read-the-docs-theme-locally, simple instructions are given to put the following in your `conf.py` file:
```
extensions = [
    ...
    'sphinx_rtd_theme',
]
html_theme = "sphinx_rtd_theme"
```
I did this instead of the more complicated logic we were using and all builds passed locally. I also triggered a build on readthedocs with these changes which also passed.
This takes care of the dependabot alerts those with access can see at https://github.com/certbot/certbot/security/dependabot.
Pinning back `cython` is needed because without it, our full test suite will fail when trying to build `pyyaml` on ARM systems.
* Do not call deprecated datetime.utcnow() and datetime.utcfromtimestamp()
* Ignore DeprecationWarnings from importing dependencies
$ python3 -Wdefault
Python 3.12.0b4 (main, Jul 12 2023, 00:00:00) [GCC 13.1.1 20230614 (Red Hat 13.1.1-4)] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> import pkg_resources
/usr/lib/python3.12/site-packages/pkg_resources/__init__.py:121: DeprecationWarning: pkg_resources is deprecated as an API
warnings.warn("pkg_resources is deprecated as an API", DeprecationWarning)
>>> import pytz
/usr/lib/python3.12/site-packages/pytz/tzinfo.py:27: DeprecationWarning: datetime.utcfromtimestamp() is deprecated and scheduled for removal in a future version. Use timezone-aware objects to represent datetimes in UTC: datetime.fromtimestamp(timestamp, datetime.UTC).
_epoch = datetime.utcfromtimestamp(0)
* Used pytz.UTC consistently for clarity
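For reference, a sketch of the timezone-aware replacements for the calls deprecated in Python 3.12:
```
from datetime import datetime, timezone

# Deprecated in Python 3.12:
#   datetime.utcnow()
#   datetime.utcfromtimestamp(ts)
# Timezone-aware equivalents:
now = datetime.now(timezone.utc)
epoch = datetime.fromtimestamp(0, tz=timezone.utc)

# Note the difference: the aware objects carry timezone.utc, while the
# deprecated calls returned naive datetimes.
assert now.tzinfo is timezone.utc
assert epoch.isoformat() == "1970-01-01T00:00:00+00:00"
```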
* repin current
* repin oldest
* csr must have version set to zero
* only set PIP_USE_PEP517 for macOS
* experiment with brew update git failure workaround
* Revert change to NamespaceConfig's constructor
NamespaceConfig's argument sources dict is now set with a method, and a
RuntimeError is raised if the dict hasn't been set when set_by_user() is
called.
* Actually update CHANGELOG to reflect the set_by_user changes
* linter appeasement
* configuration: update docs, add test
This test ensures that calling `set_by_user` without an initialized
sources dict raises a RuntimeError.
* Rewrite helpful_test to appease the linter
* Use public interface to access argparse sources dict
* HelpfulParser builds ArgumentSources dict, stores it in NamespaceConfig
After arguments/config files/user prompted input have been parsed, we
build a mapping of Namespace options to an ArgumentSource value. These
generally come from argparse's builtin "source_to_settings" dict, but
we also add a source value representing dynamic values set at runtime.
This dict is then passed to NamespaceConfig, which can then be queried
directly or via the "set_by_user" method, which replaces the global
"set_by_cli" and "option_was_set" functions.
* Use NamespaceConfig.set_by_user instead of set_by_cli/option_was_set
This involves passing the NamespaceConfig around to more functions
than before, removes the need for most of the global state shenanigans
needed by set_by_cli and friends.
* Set runtime config values on the NamespaceConfig object
This'll correctly mark them as being "runtime" values in the
ArgumentSources dict
* Bump oldest configargparse version
We need a version that has get_source_to_settings_dict()
* Add more cli unit tests, use ArgumentSource.DEFAULT by default
One of the tests revealed that ConfigArgParse's source dict excludes
arguments it considers unimportant/irrelevant. We now mark all arguments
as having a DEFAULT source by default, and update them otherwise.
* Mark more argument sources as RUNTIME
* Removes some redundant helpful_test.py tests, moves one to cli_test.py
We were already testing most of these cases in cli_test.py, only
with a more complete HelpfulArgumentParser setup. And since the hsts/no-hsts
test was manually performing the kind of argument adding that cli
already does out of the box, I figured the cli tests were a more natural
place for it.
* appease the linter
* Various fixups from review
* Add windows compatibility fix
* Add test ensuring relevant_values behaves properly
* Build sources dict in a more predictable manner
The dict is now built in a defined order: first defaults, then config
files, then env vars, then command line args. This way we eliminate the
possibility of undefined behavior if configargparse puts an arg's entry
in multiple source dicts.
* remove superfluous update to sources dict
* remove duplicate constant defines, resolve circular import situation
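A condensed sketch of the ArgumentSources bookkeeping described in the commits above, with simplified names; the real classes live in Certbot's internals and differ in detail.
```
import enum

class ArgumentSource(enum.Enum):
    DEFAULT = "default"
    CONFIG_FILE = "config_file"
    ENV_VAR = "env_var"
    COMMAND_LINE = "command_line"
    RUNTIME = "runtime"

class Config:
    """Stands in for NamespaceConfig: sources are attached after parsing."""

    def __init__(self) -> None:
        self._argument_sources = None

    def set_argument_sources(self, sources) -> None:
        # Built in a defined order: defaults, config files, env vars, CLI args.
        self._argument_sources = dict(sources)

    def set_by_user(self, var: str) -> bool:
        if self._argument_sources is None:
            raise RuntimeError("argument sources have not been set")
        source = self._argument_sources.get(var, ArgumentSource.DEFAULT)
        # Values certbot computes at runtime are not "set by the user".
        return source in (ArgumentSource.CONFIG_FILE,
                          ArgumentSource.ENV_VAR,
                          ArgumentSource.COMMAND_LINE)
```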
* letstest: -ubuntu18.04 +centos9stream +debian11
* letstest: username for centos 9 stream is ec2-user
This is mentioned on https://centos.org/download/aws-images/
* ensure mod_ssl is installed
in centos 9 stream, apache has to be restarted after mod_ssl is
installed, or the snakeoil certificates will not be present and
apache won't start.
this also stops installing nghttp2, as the relevant bug
is long fixed.
* dns-rfc2136: add test coverage for PR #9672
* fix compatibility with oldest dnspython
* rename test to be more descriptive
Co-authored-by: ohemorange <ebportnoy@gmail.com>
---------
Co-authored-by: ohemorange <ebportnoy@gmail.com>
This is, to my knowledge, an entirely inconsequential PR to add support for entirely novel challenge types.
Presently, in the [`challb_to_achall` function](399b932a86/certbot/certbot/_internal/auth_handler.py (L367)), an error is thrown if the challenge type is not of a type known to certbot. This check is mostly pointless, as an authenticator would not request a challenge unknown to it. It does, however, forbid plugins from supporting entirely novel challenges not of the key authorisation form.
* support unknown ACME challenge types
* add to changelog
* update tests
---------
Co-authored-by: Brad Warren <bmw@eff.org>
* remove pointless paragraph about --server and wildcards
* docs: update help text for --dry-run and --staging
* docs: update "Changing the ACME Server" for --dry-run
* add note about webserver reloads
* Optionally sign initial SOA query
Added configuration file option to enable signing of the initial SOA query when determining the authoritative nameserver for the zone. Default is disabled.
* Better handling of sign_query configuration and fix lint issues
* Update str casting to match 5503d12395
* Update certbot/CHANGELOG.md
Co-authored-by: alexzorin <alex@zorin.au>
* Update certbot/CHANGELOG.md
Co-authored-by: alexzorin <alex@zorin.au>
* Update dns_rfc2136.py
Updated with feedback from certbot/certbot#9672
---------
Co-authored-by: alexzorin <alex@zorin.au>
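A sketch of what signing the initial SOA query can look like with dnspython, gated behind a hypothetical `sign_query` flag; the plugin's actual option name and code structure may differ.
```
import dns.message
import dns.name
import dns.query
import dns.rdatatype
import dns.tsig
import dns.tsigkeyring

def soa_query(zone_candidate, server, keyring=None, keyname=None,
              algorithm=dns.tsig.HMAC_SHA512, sign_query=False):
    """Query SOA for a candidate zone, optionally TSIG-signing the request.

    Example keyring: dns.tsigkeyring.from_text({"keyname.": "base64secret=="})
    """
    request = dns.message.make_query(dns.name.from_text(zone_candidate),
                                     dns.rdatatype.SOA)
    if sign_query and keyring is not None:
        # Some servers are configured to refuse unsigned queries entirely.
        request.use_tsig(keyring, keyname=keyname, algorithm=algorithm)
    return dns.query.udp(request, server, timeout=10)
```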
In addition to the speed improvements in CI, the local speed improvement from both this PR and https://github.com/certbot/certbot/pull/9666, which this builds on, is even more significant. After tox has run once locally so it's had a chance to set up the different virtual environments, it now takes 39 seconds on my laptop where it used to take 137 seconds.
Fixes #6127.
* Added lineage name validity check
* Verify lineage name validity before obtaining certificate
* Added lineage name limitation to cli help
* Update documentation on certificate name
* Added lineage name validation to changelog
* Use filepath separators to determine lineage name validity
* Add unittest for private choose_lineagename method
Co-authored-by: Brad Warren <bmw@users.noreply.github.com>
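A small sketch of the kind of validation described above, with a hypothetical helper name; the real check lives alongside Certbot's storage code.
```
import os

def is_valid_lineage_name(name: str) -> bool:
    """Reject certificate names that cannot be used as a directory name.

    The lineage name becomes a path component under /etc/letsencrypt/live,
    so empty names, '.', '..', and anything containing a path separator
    are refused.
    """
    if not name or name in (os.curdir, os.pardir):
        return False
    separators = [os.sep] + ([os.altsep] if os.altsep else [])
    return not any(sep in name for sep in separators)

assert is_valid_lineage_name("example.org")
assert not is_valid_lineage_name("foo/bar")
assert not is_valid_lineage_name("../etc/passwd")
```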
* add some missing types
* install pkg-config
* install pkg-config for docker too
* add pkg-config to plugins
* pkg-config when cryptography may need to be built
* deps cleanup
* more comments
* more tweaks
Fixes https://github.com/certbot/certbot/issues/7921.
In all cases when we run `pip_install.py`, we first run `pipstrap.py`. This PR combines these two steps for convenience and to make always doing that less error prone. This will also help me with some of the `tox.ini` refactoring I'm planning to do.
I ran the full test suite on everything and tested the release script changes locally.
This change shouldn't have any effect on cryptography's setup because they install `certbot[test]` which depends on pip, setuptools, and wheel.
* always pipstrap
* use pip_install.py during releases
This is a first step towards implementing the plan I described at https://github.com/certbot/certbot/issues/7909#issuecomment-1448675456 which got a +1 from both Erica and Will. Similar changes for our other packages will be made in followup PRs to try and make this easier to review.
It may be helpful to look at https://github.com/certbot/certbot/pull/7600 when reviewing this PR where we did something similar in the past.
The value of `ignore-paths` in `.pylintrc` should work on Windows based on https://pylint.readthedocs.io/en/latest/user_guide/configuration/all-options.html#ignore-paths and the fact that on macOS/linux, changing path delimiters to `\` still causes these directories to be ignored.
I started testing this for mypy as well, but mypy doesn't currently pass for us on Windows so I didn't bother and took this opportunity to remove it from the default environments in `tox.ini`. I'll update https://github.com/certbot/certbot/issues/7803 to mention that the value of `exclude` in `mypy.ini` may need to be tweaked if anyone works on that issue.
* make acme tests internal
* no mypy-win
Adrien and I added this in https://github.com/certbot/certbot/pull/6590 in response to https://github.com/certbot/certbot/issues/6582, which I wrote. I now personally think these tests are way more trouble than they're worth.
In almost all cases, the versions pinned in `tools/requirements.txt` are used. The two exceptions to this that come to mind are users using OS packages and pip. In the former, the versions of our dependencies are picked by the OS and do not change much on most systems. As for pip, [we only "support it on a best effort basis"](https://eff-certbot.readthedocs.io/en/stable/install.html#alternative-2-pip).
Even for pip users, I'm not convinced this buys us much other than frequent test failures. We have our tests configured to error on all Python warnings and [we regularly update `tools/requirements.txt`](https://github.com/certbot/certbot/commits/master/tools/requirements.txt). Due to that, assuming our dependencies follow normal conventions, we should have a chance to fix things in response to planned API changes long before they make their way to our users. I do not think it is necessary for our tests to break immediately after an API is deprecated.
I think almost all other failures due to these tests are caused by upstream bugs. In my experience, almost all of them will sort themselves out pretty quickly. I think that responding to those that are not or planned API changes we somehow missed can be addressed when `tools/requirements.txt` is updated or when someone opens an issue. I personally don't think blocking releases or causing our nightly tests to fail is at all worth it here. I think removing this frequent cause of test failures makes things just a little bit easier for Certbot devs without costing us much of anything.
* Add async interface for finalization to acme.client.ClientV2
Add `begin_order_finalization()`/`poll_finalization()` to
`acme.client.ClientV2`, which are directly analogous to
`answer_challenge()`/`poll_authorizations()`. This allows us to
finalize an order and then later poll for its completion as separate
steps.
* Address code review feedback
Rename `begin_order_finalization` -> `begin_finalization` and tweak
wording of changelog entry
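A hedged usage sketch; the method names come from this changeset, but the exact signatures are an assumption based on the existing `finalize_order(orderr, deadline)` API.
```
import datetime

from acme.client import ClientV2
from acme.messages import OrderResource

def finalize_in_two_steps(client: ClientV2, orderr: OrderResource) -> OrderResource:
    """Kick off finalization now, poll for the certificate later.

    Mirrors answer_challenge()/poll_authorizations(): begin_finalization()
    sends the finalize request, poll_finalization() waits for the order to
    become valid. The deadline-style second argument is assumed, not confirmed.
    """
    orderr = client.begin_finalization(orderr)
    # ... other work (or persisting orderr) can happen here ...
    deadline = datetime.datetime.now() + datetime.timedelta(seconds=90)
    return client.poll_finalization(orderr, deadline)
```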
Right now if you to_json() an `OrderResource` and later deserialize
it, the `AuthorizationResource` objects don't come back through the
round-trip (they just get de-jsonified as frozendicts and worse, they
can't even be passed to `AuthorizationResource.from_json` because
frozendicts aren't dicts). In addition, the `csr_pem` field gets
encoded as an array of integers, which definitely does not get
de-jsonified into what we want.
Fix these by adding an encoder to `authorizations` and an encoder and
decoder to `csr_pem`.
`lock_test.py` is a weird, heavily customized, standalone testing relic that's giving me trouble because the name currently conflicts with `certbot/tests/lock_test.py`. Moving `certbot/tests` inside the Certbot package as discussed at https://github.com/certbot/certbot/issues/7909#issuecomment-1448675456 would avoid this; however, this is at least somewhat blocked on getting that test code passing lint and mypy checks again because we run those checks on the entirety of the Certbot package 🙃 Since `lock_test.py` could probably stand to be rewritten/refactored anyway, I took this approach.
What I did is I rewrote something largely equivalent to `lock_test.py` inside Certbot's unit tests. I chose not to do this in `certbot-ci` because it's not necessary to have an ACME server available. We're no longer explicitly testing things with the nginx plugin here like we were in `lock_test.py`; however, we are checking that `prepare` is called on the plugin at the right time, and I added comments about the importance of checking that we lock the directory during the call to `prepare` in the Apache and nginx test code.
As a bonus, this fixes https://github.com/certbot/certbot/issues/8121.
Let's begin. All pipelines are defined in `.azure-pipelines`. Currently there are two:
* `.azure-pipelines/main.yml` is the main one, executed on PRs for master, and pushes to master,
* `.azure-pipelines/advanced.yml` add installer testing on top of the main pipeline, and is executed for `test-*` branches, release branches, and nightly run for master.
* `.azure-pipelines/main.yml` is the main one, executed on PRs for main, and pushes to main,
* `.azure-pipelines/advanced.yml` add installer testing on top of the main pipeline, and is executed for `test-*` branches, release branches, and nightly run for main.
Several templates are defined in `.azure-pipelines/templates`. These YAML files aggregate common jobs configuration that can be reused in several pipelines.
@@ -64,7 +64,7 @@ Azure Pipeline needs RW on code, RO on metadata, RW on checks, commit statuses,
RW access here is required to allow update of the pipelines YAML files from Azure DevOps interface, and to
update the status of builds and PRs on GitHub side when Azure Pipelines are triggered.
Note however that no admin access is defined here: this means that Azure Pipelines cannot do anything with
protected branches, like master, and cannot modify the security context around this on GitHub.
protected branches, like main, and cannot modify the security context around this on GitHub.
Access can be defined for all or only selected repositories, which is nice.
```
@@ -91,11 +91,11 @@ grant permissions from Azure Pipelines to GitHub in order to setup a GitHub OAut
then are way too large (admin level on almost everything), while the classic approach does not add any more
permissions, and works perfectly well.__
- Select GitHub in "Select your repository section", choose certbot/certbot in Repository, master in default branch.
- Select GitHub in "Select your repository section", choose certbot/certbot in Repository, main in default branch.
- Click on YAML option for "Select a template"
- Choose a name for the pipeline (eg. test-pipeline), and browse to the actual pipeline YAML definition in the
- [ ] The Certbot team has recently expressed interest in reviewing a PR for this. If not, this PR may be closed due to our limited resources and need to prioritize how we spend them.
- [ ] If the change being made is to a [distributed component](https://certbot.eff.org/docs/contributing.html#code-components-and-layout), edit the `master` section of `certbot/CHANGELOG.md` to include a description of the change being made.
- [ ] If the change being made is to a [distributed component](https://certbot.eff.org/docs/contributing.html#code-components-and-layout), edit the `main` section of `certbot/CHANGELOG.md` to include a description of the change being made.
- [ ] Add or update any documentation as needed to support the changes in this PR.
- [ ] Include your name in `AUTHORS.md` if you like.
echo "{\"text\":\"## Updates Across Certbot Repos\n\n
- Certbot team members SHOULD look at: [link]($MERGED_URL)\n\n
- Certbot team members MAY also want to look at: [link]($UPDATED_URL)\n\n
- Want to Discuss something today? Place it [here](https://docs.google.com/document/d/17YMUbtC1yg6MfiTMwT8zVm9LmO-cuGVBom0qFn8XJBM/edit?usp=sharing) and we can meet today on Zoom.\n\n
- The key words SHOULD and MAY in this message are to be interpreted as described in [RFC 8174](https://www.rfc-editor.org/rfc/rfc8174). \"
- Certbot team members SHOULD look at: [link](${{ env.MERGED_URL }})
- Certbot team members MAY also want to look at: [link](${{ env.UPDATED_URL }})
- Want to Discuss something today? Place it [here](https://docs.google.com/document/d/17YMUbtC1yg6MfiTMwT8zVm9LmO-cuGVBom0qFn8XJBM/edit?usp=sharing) and we can meet today on Zoom.
- The key words SHOULD and MAY in this message are to be interpreted as described in [RFC 8174](https://www.rfc-editor.org/rfc/rfc8174).