This PR changed the Auth interface signature from
`Verify(http *http.Request, w http.ResponseWriter, store DataStore, sess
SessionStore) *user_model.User`
to
`Verify(http *http.Request, w http.ResponseWriter, store DataStore, sess
SessionStore) (*user_model.User, error)`.
The new `error` return value means that the verification condition matched but the verify process itself failed, in which case the auth process should stop.
Before this PR, when `Verify` returned a `nil` user there was no way to tell why: was the match condition not satisfied, or did verification fail? These two outcomes need different handling. If the match condition is not satisfied, the next auth method should be tried, and if no method is left the request is treated as an anonymous user. If the condition matched but verification failed, the auth process should stop and return immediately.
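A minimal sketch of how a caller can distinguish the two outcomes under the new signature (the `verifyAll` helper and the loop are illustrative, not the actual Gitea code):
```go
// Sketch only: the Verify signature is the one described above; the loop is illustrative.
func verifyAll(methods []Auth, req *http.Request, w http.ResponseWriter, store DataStore, sess SessionStore) (*user_model.User, error) {
	for _, m := range methods {
		user, err := m.Verify(req, w, store, sess)
		if err != nil {
			// the method's condition matched but verification failed: stop the auth process
			return nil, err
		}
		if user != nil {
			// successfully authenticated
			return user, nil
		}
		// user == nil && err == nil: condition not matched, try the next method
	}
	// no method matched: treat the request as anonymous
	return nil, nil
}
```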
This will fix #20563
Co-authored-by: KN4CK3R <admin@oldschoolhack.me>
Co-authored-by: Jason Song <i@wolfogre.com>
If a user has reached the maximum limit of repositories:
- Before:
  - disallow create
  - allow fork without limit
- This patch:
  - disallow create
  - disallow fork
  - Add option `ALLOW_FORK_WITHOUT_MAXIMUM_LIMIT` (default **true**): enabling it lets users fork repositories without the maximum number limit (see the sketch below)
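A rough sketch of the resulting check; the setting field and the `CanCreateRepo` helper are assumptions here, only the `ALLOW_FORK_WITHOUT_MAXIMUM_LIMIT` option name comes from this patch:
```go
// Hypothetical sketch: forks only count against the repository limit
// when ALLOW_FORK_WITHOUT_MAXIMUM_LIMIT is disabled.
func canForkRepo(owner *user_model.User) error {
	if setting.Repository.AllowForkWithoutMaximumLimit {
		return nil // forks are exempt from the maximum repository limit
	}
	if !owner.CanCreateRepo() { // assumed helper: true while under the limit
		return fmt.Errorf("user %q has reached the maximum limit of repositories", owner.Name)
	}
	return nil
}
```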
fixed https://github.com/go-gitea/gitea/issues/21847
Signed-off-by: Xinyu Zhou <i@sourcehut.net>
Since we changed the /api/v1/ routes to disallow session authentication
we also removed their reliance on CSRF. However, we left the
ReverseProxy authentication here - but this means that POSTs to the API
are no longer protected by CSRF.
Now, ReverseProxy authentication is a kind of session authentication,
and is therefore inconsistent with the removal of session from the API.
This PR proposes that we simply remove the ReverseProxy authentication
from the API and therefore users of the API must explicitly use tokens
or basic authentication.
Replace #22077, Close #22221, Close #22077
Signed-off-by: Andrew Thornton <art27@cantab.net>
Fixes #22178
After this change, uploaded versions with different SemVer metadata are treated as the same version and trigger a duplicate version error.
Co-authored-by: Lunny Xiao <xiaolunwen@gmail.com>
depends on #22094
Fixes https://codeberg.org/forgejo/forgejo/issues/77
The old logic did not consider `is_internal`.
Co-authored-by: Lunny Xiao <xiaolunwen@gmail.com>
Co-authored-by: techknowlogick <techknowlogick@gitea.io>
Close #14601, Fix #3690
Revival of #14601: updated to the current code, cleaned up, and with more read/write checks added.
Signed-off-by: Andrew Thornton <art27@cantab.net>
Signed-off-by: Andre Bruch <ab@andrebruch.com>
Co-authored-by: zeripath <art27@cantab.net>
Co-authored-by: 6543 <6543@obermui.de>
Co-authored-by: Norwin <git@nroo.de>
Co-authored-by: Lunny Xiao <xiaolunwen@gmail.com>
`hex.EncodeToString` performs better than `fmt.Sprintf("%x", []byte)`, so we should use it wherever possible.
I'm not an extreme performance fan, so I think there are some exceptions:
- `fmt.Sprintf("%x", func(...)[N]byte())`
- We can't slice the function return value directly, and it's not worth
adding lines.
```diff
func A() [20]byte { ... }
- a := fmt.Sprintf("%x", A())
- a := hex.EncodeToString(A()[:]) // invalid
+ tmp := A()
+ a := hex.EncodeToString(tmp[:])
```
- `fmt.Sprintf("%X", []byte)`
- `strings.ToUpper(hex.EncodeToString(bytes))` has even worse
performance.
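For the typical hash-digest case the replacement is straightforward (standard-library example, not code from this PR):
```go
import (
	"crypto/sha256"
	"encoding/hex"
)

// digest returns the hex form of the SHA-256 sum of data.
// hex.EncodeToString(sum[:]) produces the same string as
// fmt.Sprintf("%x", sum), just faster.
func digest(data []byte) string {
	sum := sha256.Sum256(data) // [32]byte, so it has to be sliced first
	return hex.EncodeToString(sum[:])
}
```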
Change all license headers to comply with REUSE specification.
Fix #16132
Co-authored-by: flynnnnnnnnnn <flynnnnnnnnnn@github>
Co-authored-by: John Olheiser <john.olheiser@gmail.com>
This PR addresses #19586
I added a mutex to the upload version creation which will prevent the
push errors when two requests try to create these database entries. I'm
not sure if this should be the final solution for this problem.
I added a workaround to allow a reupload of missing blobs. Normally a
reupload is skipped because the database knows the blob is already
present. The workaround checks if the blob exists on the file system.
This should not be needed anymore with the above fix so I marked this
code to be removed with Gitea v1.20.
Co-authored-by: Lunny Xiao <xiaolunwen@gmail.com>
This PR adds a context parameter to a bunch of methods. Some helper
`xxxCtx()` methods got replaced with the normal name now.
Co-authored-by: delvh <dev.lh@web.de>
Co-authored-by: Lunny Xiao <xiaolunwen@gmail.com>
Fix #19513
This PR introduces a new db method `InTransaction(context.Context)`, and adds built-in checks to `db.TxContext` and `db.WithTx`.
A new method `db.AutoTx` has also been introduced; it is not used here yet, but it could be used by other PRs.
`WithTx` always opens a new transaction and returns an error if the context already contains one.
`AutoTx` opens a new transaction only if the context does not already contain one, which means that, barring errors, it always ends up inside a transaction.
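A rough usage sketch of the behaviour described above; the exact parameter order of `WithTx`/`AutoTx` in the `db` package may differ, so treat the signatures as assumptions:
```go
// Hypothetical usage sketch based on the behaviour described in this PR.
func updateSomething(ctx context.Context) error {
	if db.InTransaction(ctx) {
		// already inside a transaction: just reuse the transactional context
		return doUpdate(ctx)
	}
	// WithTx always opens a new transaction and errors if ctx already holds one;
	// AutoTx would instead reuse an existing transaction when present.
	return db.WithTx(ctx, func(ctx context.Context) error {
		return doUpdate(ctx)
	})
}
```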
Co-authored-by: delvh <dev.lh@web.de>
Co-authored-by: 6543 <6543@obermui.de>
In #21637 it was mentioned that the purpose of the API routes for the
packages is unclear. This PR adds some documentation.
Fix #21637
Signed-off-by: Andrew Thornton <art27@cantab.net>
Co-authored-by: KN4CK3R <admin@oldschoolhack.me>
Fix #20921.
`ctx.Repo.GitRepo` is used when deleting issues if the issue is a PR.
Co-authored-by: Lunny Xiao <xiaolunwen@gmail.com>
Co-authored-by: Lauris BH <lauris@nix.lv>
This PR enhances the CORS middleware usage by allowing for the headers
to be configured in `app.ini`.
Fixes #21746
Co-authored-by: KN4CK3R <admin@oldschoolhack.me>
Co-authored-by: John Olheiser <john.olheiser@gmail.com>
Co-authored-by: Lunny Xiao <xiaolunwen@gmail.com>
Related #20471
This PR adds global quota limits for the package registry. Settings for
individual users/orgs can be added in a separate PR using the settings
table.
Co-authored-by: Lauris BH <lauris@nix.lv>
Co-authored-by: Lunny Xiao <xiaolunwen@gmail.com>
This addresses #21707 and adds a second package test case for a
non-semver compatible version (this might be overkill though since you
could also edit the old package version to have an epoch in front and
see the error; this just seemed more flexible for the future).
Co-authored-by: KN4CK3R <admin@oldschoolhack.me>
_This is a different approach to #20267, I took the liberty of adapting
some parts, see below_
## Context
In some cases, a webhook endpoint requires some kind of authentication.
The usual way is by sending a static `Authorization` header, with a
given token. For instance:
- Matrix expects a `Bearer <token>` (already implemented by storing the
header in cleartext in the metadata, which is buggy on retry #19872)
- TeamCity #18667
- Gitea instances #20267
- SourceHut https://man.sr.ht/graphql.md#authentication-strategies (this
is my actual personal need :)
## Proposed solution
Add a dedicated encrypted column to the webhook table (instead of storing
it as meta as proposed in #20267), so that it is available for all
present and future hook types (especially the custom ones, #19307).
This would also solve the buggy Matrix retry #19872.
As a first step, I would recommend focusing on the backend logic and
improving the frontend at a later stage. For now the UI is a simple
`Authorization` field (which could later be customized with `Bearer` and
`Basic` switches):
![2022-08-23-142911](https://user-images.githubusercontent.com/3864879/186162483-5b721504-eef5-4932-812e-eb96a68494cc.png)
The header name is hard-coded, since I couldn't find any use case
justifying otherwise.
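A minimal sketch of what this enables at delivery time; the `HeaderAuthorization` accessor name is an assumption, not necessarily the final API:
```go
// Hypothetical sketch: when building the outgoing request for a hook task,
// attach the stored (decrypted) Authorization header if one was configured.
func addAuthorizationHeader(req *http.Request, w *webhook_model.Webhook) error {
	header, err := w.HeaderAuthorization() // assumed accessor for the new encrypted column
	if err != nil {
		return err
	}
	if header != "" {
		req.Header.Set("Authorization", header)
	}
	return nil
}
```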
## Questions
- What do you think of this approach? @justusbunsi @Gusted @silverwind
- ~~How are the migrations generated? Do I have to manually create a new
file, or is there a command for that?~~
- ~~I started adding it to the API: should I complete it or should I
drop it? (I don't know how much the API is actually used)~~
## Done as well:
- add a migration for the existing matrix webhooks and remove the
`Authorization` logic there
_Closes #19872_
Co-authored-by: Lunny Xiao <xiaolunwen@gmail.com>
Co-authored-by: Gusted <williamzijl7@hotmail.com>
Co-authored-by: delvh <dev.lh@web.de>
I found myself wondering whether a PR I scheduled for automerge was
actually merged. It was, but I didn't receive a mail notification for it
- that makes sense considering I am the doer and usually don't want to
receive such notifications. But ideally I want to receive a notification
when a PR was merged because I scheduled it for automerge.
This PR implements exactly that.
The implementation works, but I wonder if there's a way to avoid passing
the "This PR was automerged" state down so much. I tried solving this
via the database (checking if there's an automerge scheduled for this PR
when sending the notification) but that did not work reliably, probably
because sending the notification happens async and the entry might have
already been deleted. My implementation might be the most
straightforward but maybe not the most elegant.
Signed-off-by: Andrew Thornton <art27@cantab.net>
Co-authored-by: Lauris BH <lauris@nix.lv>
Co-authored-by: Andrew Thornton <art27@cantab.net>
Co-authored-by: Lunny Xiao <xiaolunwen@gmail.com>
Fixes a 500 error/panic when using the changed PR files API with pages
that should return empty lists because there are no items anymore.
`start-end` is then < 0, which ends in a panic.
Co-authored-by: Lunny Xiao <xiaolunwen@gmail.com>
Co-authored-by: 6543 <6543@obermui.de>
Co-authored-by: delvh <dev.lh@web.de>
I noticed an admin is not allowed to upload packages for other users
because `ctx.IsSigned` was not set.
I also added checks for `user.IsActive` and `user.ProhibitLogin` because
neither was checked before. Tests enforce this now.
Co-authored-by: Lauris BH <lauris@nix.lv>
The OAuth spec [defines two types of
client](https://datatracker.ietf.org/doc/html/rfc6749#section-2.1),
confidential and public. Previously Gitea assumed all clients to be
confidential.
> OAuth defines two client types, based on their ability to authenticate
securely with the authorization server (i.e., ability to
> maintain the confidentiality of their client credentials):
>
> confidential
> Clients capable of maintaining the confidentiality of their
credentials (e.g., client implemented on a secure server with
> restricted access to the client credentials), or capable of secure
client authentication using other means.
>
> **public
> Clients incapable of maintaining the confidentiality of their
credentials (e.g., clients executing on the device used by the resource
owner, such as an installed native application or a web browser-based
application), and incapable of secure client authentication via any
other means.**
>
> The client type designation is based on the authorization server's
definition of secure authentication and its acceptable exposure levels
of client credentials. The authorization server SHOULD NOT make
assumptions about the client type.
https://datatracker.ietf.org/doc/html/rfc8252#section-8.4
> Authorization servers MUST record the client type in the client
registration details in order to identify and process requests
accordingly.
Require PKCE for public clients:
https://datatracker.ietf.org/doc/html/rfc8252#section-8.1
> Authorization servers SHOULD reject authorization requests from native
apps that don't use PKCE by returning an error message
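For reference, the S256 PKCE pair a public client must now send looks like this (standard-library sketch, not Gitea code):
```go
import (
	"crypto/rand"
	"crypto/sha256"
	"encoding/base64"
)

// newPKCEPair returns a code_verifier (sent with the token request) and the
// matching S256 code_challenge (sent with the authorization request).
func newPKCEPair() (verifier, challenge string, err error) {
	buf := make([]byte, 32)
	if _, err = rand.Read(buf); err != nil {
		return "", "", err
	}
	verifier = base64.RawURLEncoding.EncodeToString(buf) // 43 chars, per RFC 7636
	sum := sha256.Sum256([]byte(verifier))
	challenge = base64.RawURLEncoding.EncodeToString(sum[:])
	return verifier, challenge, nil
}
```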
Fixes #21299
Co-authored-by: wxiaoguang <wxiaoguang@gmail.com>
Co-authored-by: Lunny Xiao <xiaolunwen@gmail.com>
Previously, mentioning a user linked to their profile regardless of
whether the user existed. This change checks whether the user exists and
only adds a profile link if they do.
* Fixes #3444
Signed-off-by: Yarden Shoham <hrsi88@gmail.com>
Co-authored-by: wxiaoguang <wxiaoguang@gmail.com>
Co-authored-by: Lunny Xiao <xiaolunwen@gmail.com>
At the moment a repository reference is needed for webhooks. With the
upcoming package PR we need to send webhooks without a repository
reference. For example a package is uploaded to an organization. In
theory this enables the usage of webhooks for future user actions.
This PR removes the repository id from `HookTask` and changes how the
hooks are processed (see `services/webhook/deliver.go`). In a follow-up
PR I want to remove the usage of the `UniqueQueue` and replace it with a
normal queue because there is no reason for it to be unique.
Co-authored-by: 6543 <6543@obermui.de>
Fixes #21379
The commits are capped by `setting.UI.FeedMaxCommitNum`, so
`len(commits)` is not the correct number. This PR therefore adds a new
`TotalCommits` field.
Co-authored-by: wxiaoguang <wxiaoguang@gmail.com>
NuGet symbol file lookup returned 404 on Visual Studio 2019 due to the
case-sensitive API router. The API router should accept case-insensitive GUIDs.
Co-authored-by: techknowlogick <techknowlogick@gitea.io>
Co-authored-by: wxiaoguang <wxiaoguang@gmail.com>
Fixes #21250
Related #20414
Conan packages don't have to follow SemVer.
The migration fixes the setting for all existing Conan and Generic
(#20414) packages.
Calls to ToCommit are very slow because they fetch diffs and analyze files.
This patch lets us pass `stat` as false to speed up fetching a commit
when we don't need the diff.
/v1/repo/commits now has `stat` defaulting to true. Set it to false to
fetch thousands of commits per second instead of 2-5 per second.
This adds an API endpoint `/files` to PRs that allows getting the list of changed files.
Built upon #18228; the reviews there are included.
closes https://github.com/go-gitea/gitea/issues/654
Co-authored-by: Anton Bracke <anton@ju60.de>
Co-authored-by: 6543 <6543@obermui.de>
Co-authored-by: wxiaoguang <wxiaoguang@gmail.com>
This PR would presumably
Fix #20522, Fix #18773, Fix #19069, Fix #21077, Fix #13622
-----
1. Check whether unit type is currently enabled
2. Check if it _will_ be enabled via opt
3. Allow modification as necessary
Signed-off-by: jolheiser <john.olheiser@gmail.com>
Co-authored-by: wxiaoguang <wxiaoguang@gmail.com>
Co-authored-by: 6543 <6543@obermui.de>
Addition to #20734, Fixes #20717
The `/index.json` endpoint needs to be accessible even if the registry
is private. The NuGet client uses this endpoint without
authentication.
The old fix only works if the NuGet cli is used with `--source <name>`
but not with `--source <url>/index.json`.
Co-authored-by: wxiaoguang <wxiaoguang@gmail.com>
Close #20098: implemented the NPM registry search endpoint to match what's described by https://github.com/npm/registry/blob/master/docs/REGISTRY-API.md#get-v1search
Currently only the bare minimum has been implemented to work with the [Unity Package Manager](https://docs.unity3d.com/Manual/upm-ui.html).
Co-authored-by: Jack Vine <jackv@jack-lemur-suse.cat-prometheus.ts.net>
Co-authored-by: KN4CK3R <admin@oldschoolhack.me>
Co-authored-by: Lunny Xiao <xiaolunwen@gmail.com>
Co-authored-by: wxiaoguang <wxiaoguang@gmail.com>
The PyPI name regexp is too restrictive and only permits lowercase characters. This PR adjusts the regexp to add support for uppercase characters.
Fix #21014
Add support for triggering webhook notifications on wiki changes.
This PR contains the frontend and backend for webhook notifications on wiki actions (create a new page, rename a page, edit a page and delete a page). The frontend got a new checkbox under the Custom Event -> Repository Events section. There is only one checkbox for the create/edit/rename/delete actions, because it makes no sense to separate them, and others like releases or packages follow the same schema.
![image](https://user-images.githubusercontent.com/121972/177018803-26851196-831f-4fde-9a4c-9e639b0e0d6b.png)
The actions itself are separated, so that different notifications will be executed (with the "action" field). All the webhook receivers implement the new interface method (Wiki) and the corresponding tests.
When implementing this, I encountered a little bug when editing a wiki page. Creating and editing a wiki page are technically the same action and are both handled by the ```updateWikiPage``` function, but the function needs to know whether it is a new wiki page or just a change. This distinction is made via the ```action``` parameter, which was not sent by the frontend (on form submit). This PR fixes that by adding the ```action``` parameter with the values ```_new``` or ```_edit```, which is used by the ```updateWikiPage``` function.
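The dispatch inside `updateWikiPage` then looks roughly like this (illustrative sketch; only the `_new`/`_edit` values come from this PR):
```go
// Illustrative sketch: updateWikiPage branches on the "action" value that the
// frontend now submits with the form.
func updateWikiPage(ctx *context.Context) {
	switch ctx.FormString("action") {
	case "_new":
		// create a brand-new wiki page and fire the wiki "create" webhook event
	case "_edit":
		// update the existing page and fire the wiki "edit" webhook event
	}
}
```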
I've done integration tests with Matrix and Gitea (HTTP).
![image](https://user-images.githubusercontent.com/121972/177018795-eb5cdc01-9ba3-483e-a6b7-ed0e313a71fb.png)
Fix #16457
Signed-off-by: Aaron Fischer <mail@aaron-fischer.net>
When migrating add several more important sanity checks:
* SHAs must be SHAs
* Refs must be valid Refs
* URLs must be reasonable
Signed-off-by: Andrew Thornton <art27@cantab.net>
Signed-off-by: Andrew Thornton <art27@cantab.net>
Co-authored-by: techknowlogick <matti@mdranta.net>
The webhook payload should use the right ref when it's specified in the testing request.
The compare URL should not be empty; a URL like `compare/A...A` seems useless in most cases but is helpful when testing.
* fix hard-coded timeout and error panic in API archive download endpoint
This commit updates the `GET /api/v1/repos/{owner}/{repo}/archive/{archive}`
endpoint which prior to this PR had a couple of issues.
1. The endpoint had a hard-coded 20s timeout for the archiver to complete after
which a 500 (Internal Server Error) was returned to the client. For a scripted
API client there was no clear way of telling that the operation timed out and
that it should retry.
2. Whenever the timeout _did occur_, the code used to panic. This was caused by
the API endpoint "delegating" to the same call path as the web, which uses a
slightly different way of reporting errors (HTML rather than JSON for
example).
More specifically, `api/v1/repo/file.go#GetArchive` just called through to
`web/repo/repo.go#Download`, which expects the `Context` to have a `Render`
field set, but which is `nil` for API calls. Hence, a `nil` pointer error.
The code addresses (1) by dropping the hard-coded timeout. Instead, any
timeout/cancelation on the incoming `Context` is used.
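In practice this means waiting on the request context rather than a fixed timer (simplified sketch; names are illustrative):
```go
// Simplified sketch: wait for the archiver using the request context rather
// than a hard-coded timeout, so client cancellation is honoured.
func waitForArchive(ctx context.Context, done <-chan error) error {
	select {
	case err := <-done: // archiver finished (successfully or with an error)
		return err
	case <-ctx.Done(): // client went away or an upstream deadline fired
		return ctx.Err()
	}
}
```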
The code addresses (2) by updating the API endpoint to use a separate call path
for the API-triggered archive download. This avoids producing HTML error
responses (it now produces JSON errors).
Signed-off-by: Peter Gardfjäll <peter.gardfjall.work@gmail.com>
The recovery, API, Web and package frameworks all create their own HTML
Renderers. This increases the memory requirements of Gitea
unnecessarily with duplicate templates being kept in memory.
Further the reloading framework in dev mode for these involves locking
and recompiling all of the templates on each load. This will potentially
hide concurrency issues and it is inefficient.
This PR stores the templates renderer in the context and stores this
context in the NormalRoutes; it then creates an fsnotify.Watcher
framework to watch files.
The watching framework is then extended to the mailer templates which
were previously not being reloaded in dev.
Then the locales are simplified to a similar structure.
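The watching part is plain fsnotify; a trimmed-down sketch of the pattern (not the exact Gitea code):
```go
import (
	"log"

	"github.com/fsnotify/fsnotify"
)

// watchTemplates invokes reload whenever a file in dir changes.
// The directory and the reload callback are illustrative.
func watchTemplates(dir string, reload func()) error {
	watcher, err := fsnotify.NewWatcher()
	if err != nil {
		return err
	}
	defer watcher.Close()
	if err := watcher.Add(dir); err != nil {
		return err
	}
	for {
		select {
		case ev := <-watcher.Events:
			if ev.Op&(fsnotify.Write|fsnotify.Create|fsnotify.Remove|fsnotify.Rename) != 0 {
				reload()
			}
		case err := <-watcher.Errors:
			log.Printf("template watcher: %v", err)
		}
	}
}
```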
Fix #20210, Fix #20211, Fix #20217
Signed-off-by: Andrew Thornton <art27@cantab.net>
Add code to test if GetAttachmentByID returns an ErrAttachmentNotExist error
and return NotFound instead of InternalServerError
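Roughly the intended handling (sketch; exact package paths and context helpers may differ):
```go
// Sketch: map "attachment does not exist" to 404 instead of 500.
attachment, err := repo_model.GetAttachmentByID(ctx, attachmentID)
if err != nil {
	if repo_model.IsErrAttachmentNotExist(err) {
		ctx.NotFound()
	} else {
		ctx.Error(http.StatusInternalServerError, "GetAttachmentByID", err)
	}
	return
}
```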
Fix #20884
Signed-off-by: Andrew Thornton <art27@cantab.net>
Co-authored-by: Lunny Xiao <xiaolunwen@gmail.com>
The use of `--follow` makes getting these commits very slow on large repositories
as it results in searching the whole commit tree for a blob.
Now as nice as the results of `--follow` are, I am uncertain whether it is really
of sufficient importance to keep around.
Fix #20764
Signed-off-by: Andrew Thornton <art27@cantab.net>
Signed-off-by: Andrew Thornton <art27@cantab.net>
Co-authored-by: techknowlogick <techknowlogick@gitea.io>
* Added support for Pub packages.
* Update docs/content/doc/packages/overview.en-us.md
Co-authored-by: Gergely Nagy <algernon@users.noreply.github.com>
Co-authored-by: Lunny Xiao <xiaolunwen@gmail.com>
Co-authored-by: Gergely Nagy <algernon@users.noreply.github.com>
Co-authored-by: Lauris BH <lauris@nix.lv>
- Add a new push mirror to a specific repository
- Sync now (send all the changes to the configured push mirrors)
- Get list of all push mirrors of a repository
- Get a push mirror by ID
- Delete push mirror by ID
Signed-off-by: Mohamed Sekour <mohamed.sekour@exfo.com>
Signed-off-by: Andrew Thornton <art27@cantab.net>
Co-authored-by: zeripath <art27@cantab.net>
* Add latest commit's SHA to content response
- When requesting the contents of a filepath, add the latest commit's
SHA to the requested file.
- Resolves #12840
* Add swagger
* Fix NPE
* Fix tests
* Hook into LastCommitCache
* Move AddLastCommitCache to a common nogogit and gogit file
Signed-off-by: Andrew Thornton <art27@cantab.net>
* Prevent NPE
Co-authored-by: Andrew Thornton <art27@cantab.net>
Co-authored-by: wxiaoguang <wxiaoguang@gmail.com>
There are existing packages out there whose versions do not conform to SemVer, yet one would like to have them available in a generic package repository. To this end, remove the SemVer restriction on package versions when using the Generic package registry, and replace it with a check that simply makes sure the version isn't empty.
Signed-off-by: Gergely Nagy <me@gergo.csillger.hu>
Co-authored-by: KN4CK3R <admin@oldschoolhack.me>
Co-authored-by: 6543 <6543@obermui.de>
The LastCommitCache code is a little complex and there is unnecessary
duplication between the gogit and nogogit variants.
This PR adds the LastCommitCache as a field to the git.Repository and
pre-creates it in the ReferencesGit helpers etc. There has been some
simplification and unification of the variant code.
Signed-off-by: Andrew Thornton <art27@cantab.net>
When you create a new release (e.g. via Tea) and specify a tag that already exists in the repository, Gitea will instead use the `UpdateRelease` functionality. However, it currently doesn't set the Target field. This PR fixes that.
[spectral](https://github.com/stoplightio/spectral) lints
openapi/swagger files for mistakes; it has identified a few, which I've fixed.
I had to put it into `lint-frontend` because it depends on node_modules
and so cannot run on Drone during the backend target. I plan to refactor
these targets later to `lint-js` and `lint-go` so that they are
categorized based on the tool dependencies.
Support synchronizing with the push mirrors whenever new commits are pushed or synced from a pull mirror.
Related issues: #18220
Co-authored-by: delvh <dev.lh@web.de>
Co-authored-by: zeripath <art27@cantab.net>
Co-authored-by: Lunny Xiao <xiaolunwen@gmail.com>
Users who are following or being followed by a user should only be
displayed if the viewing user can see them.
Signed-off-by: Andrew Thornton <art27@cantab.net>
* Check that the project has the same repository ID as the issue when assigning a project to an issue
* Check that the issue's repository ID matches the project's repository ID
* Add more permission checking
* Remove invalid argument
* Fix errors
* Add generic check
* Remove duplicated check
* Return error + add check for new issues
* Apply suggestions from code review
Co-authored-by: KN4CK3R <admin@oldschoolhack.me>
Co-authored-by: Gusted <williamzijl7@hotmail.com>
Co-authored-by: KN4CK3R <admin@oldschoolhack.me>
Co-authored-by: 6543 <6543@obermui.de>
* go.mod: add go-fed/{httpsig,activity/pub,activity/streams} dependency
go get github.com/go-fed/activity/streams@master
go get github.com/go-fed/activity/pub@master
go get github.com/go-fed/httpsig@master
* activitypub: implement /api/v1/activitypub/user/{username} (#14186)
Return information regarding a Person (as defined in ActivityStreams
https://www.w3.org/TR/activitystreams-vocabulary/#dfn-person).
Refs: https://github.com/go-gitea/gitea/issues/14186
Signed-off-by: Loïc Dachary <loic@dachary.org>
* activitypub: add the public key to Person (#14186)
Refs: https://github.com/go-gitea/gitea/issues/14186
Signed-off-by: Loïc Dachary <loic@dachary.org>
* activitypub: go-fed conformant Clock instance
Signed-off-by: Loïc Dachary <loic@dachary.org>
* activitypub: signing http client
Signed-off-by: Loïc Dachary <loic@dachary.org>
* activitypub: implement the ReqSignature middleware
Signed-off-by: Loïc Dachary <loic@dachary.org>
* activitypub: hack_16834
Signed-off-by: Loïc Dachary <loic@dachary.org>
* Fix CI checks-backend errors with go mod tidy
Signed-off-by: Anthony Wang <ta180m@pm.me>
* Change 2021 to 2022, properly format package imports
Signed-off-by: Anthony Wang <ta180m@pm.me>
* Run make fmt and make generate-swagger
Signed-off-by: Anthony Wang <ta180m@pm.me>
* Use Gitea JSON library, add assert for pkp
Signed-off-by: Anthony Wang <ta180m@pm.me>
* Run make fmt again, fix err var redeclaration
Signed-off-by: Anthony Wang <ta180m@pm.me>
* Remove LogSQL from ActivityPub person test
Signed-off-by: Anthony Wang <ta180m@pm.me>
* Assert if json.Unmarshal succeeds
Signed-off-by: Anthony Wang <ta180m@pm.me>
* Cleanup, handle invalid usernames for ActivityPub person GET request
Signed-off-by: Anthony Wang <ta180m@pm.me>
* Rename hack_16834 to user_settings
Signed-off-by: Anthony Wang <ta180m@pm.me>
* Use the httplib module instead of http for GET requests
* Clean up whitespace with make fmt
* Use time.RFC1123 and make the http.Client proxy-aware
* Check if digest algo is supported in setting module
* Clean up some variable declarations
* Remove unneeded copy
* Use system timezone instead of setting.DefaultUILocation
* Use named constant for httpsigExpirationTime
* Make pubKey IRI #main-key instead of /#main-key
* Move /#main-key to #main-key in tests
* Implemented Webfinger endpoint.
* Add visible check.
* Add user profile as alias.
* Add actor IRI and remote interaction URL to WebFinger response
* fmt
* Fix lint errors
* Use go-ap instead of go-fed
* Run go mod tidy to fix missing modules in go.mod and go.sum
* make fmt
* Convert remaining code to go-ap
* Clean up go.sum
* Fix JSON unmarshall error
* Fix CI errors by adding @context to Person() and making sure types match
* Correctly decode JSON in api_activitypub_person_test.go
* Force CI rerun
* Fix TestActivityPubPersonInbox segfault
* Fix lint error
* Use @mariusor's suggestions for idiomatic go-ap usage
* Correctly add inbox/outbox IRIs to person
* Code cleanup
* Remove another LogSQL from ActivityPub person test
* Move httpsig algos slice to an init() function
* Add actor IRI and remote interaction URL to WebFinger response
* Update TestWebFinger to check for ActivityPub IRI in aliases
* make fmt
* Force CI rerun
* WebFinger: Add CORS header and fix Href -> Template for remote interactions
The CORS header is needed due to https://datatracker.ietf.org/doc/html/rfc7033#section-5 and fixes some Peertube <-> Gitea federation issues
* make lint-backend
* Make sure Person endpoint has Content-Type application/activity+json and includes PreferredUsername, URL, and Icon
Setting the correct Content-Type is essential for federating with Mastodon
* Use UTC instead of GMT
* Rename pkey to pubKey
* Make sure HTTP request Date in GMT
* make fmt
* dont drop err
* Make sure API responses always refer to username in original case
Copied from what I wrote in the #19133 discussion: Handling username case is a very tricky issue, and I've already encountered a Mastodon <-> Gitea federation bug due to Gitea considering Ta180m and ta180m to be the same user while Mastodon thinks they are two different users. I think the best way forward is for Gitea to only use the original-case version of the username for federation, so other AP software doesn't get confused.
* Move httpsig algs constant slice to modules/setting/federation.go
* Add new federation settings to app.example.ini and config-cheat-sheet
* Return if marshalling error
* Make sure Person IRIs are generated correctly
This commit ensures that if the setting.AppURL is something like "http://127.0.0.1:42567" (like in the integration tests), a trailing slash will be added after that URL.
* If httpsig verification fails, fix Host header and try again
This fixes a very rare bug: when Gitea and another AP server (confirmed to happen with Mastodon) are running on the same machine, Gitea fails to verify incoming HTTP signatures. This is because the other AP server creates the sig with the public Gitea domain as the Host. However, when Gitea receives the request, the Host header is instead localhost, so the signature verification fails. Manually changing the Host header to the correct value and trying the verification again fixes the bug.
* Revert "If httpsig verification fails, fix Host header and try again"
This reverts commit f53e46c721a037c55facb9200106a6b491bf834c.
The bug was actually caused by nginx messing up the Host header when reverse-proxying since I didn't have the line `proxy_set_header Host $host;` in my nginx config for Gitea.
* Go back to using ap.IRI to generate inbox and outbox IRIs
* use const for key values
* Update routers/web/webfinger.go
* Use ctx.JSON in Person response to make code cleaner
* Revert "Use ctx.JSON in Person response to make code cleaner"
This doesn't work because the ctx.JSON() function already sends the response out and it's too late to edit the headers.
This reverts commit 95aad988975be3393c76094864ed6ba962157e0c.
* Use activitypub.ActivityStreamsContentType for Person response Content Type
* Limit maximum ActivityPub request and response sizes to a configurable setting
* Move setting key constants to models/user/setting_keys.go
* Fix failing ActivityPubPerson integration test by checking the correct field for username
* Add a warning about changing settings that can break federation
* Add better comments
* Don't multiply Federation.MaxSize by 1<<20 twice
* Add more better comments
* Fix failing ActivityPubMissingPerson test
We now use ctx.ContextUser so the message printed out when a user does not exist is slightly different
* make generate-swagger
For some reason I didn't realize that /templates/swagger/v1_json.tmpl was machine-generated by make generate-swagger... I've been editing it by hand for three months! 🤦
* Move getting the RFC 2616 time to a separate function
* More code cleanup
* Update go-ap to fix empty liked collection and removed unneeded HTTP headers
* go mod tidy
* Add ed25519 to httpsig algorithms
* Use go-ap/jsonld to add @context and marshal JSON
* Change Gitea user agent from the default to Gitea/Version
* Use ctx.ServerError and remove all remote interaction code from webfinger.go
* Move access and repo permission to models/perm/access
* fix test
* fix git test
* Move functions sequence
* Some improvements per @KN4CK3R and @delvh
* Move issues related code to models/issues
* Move some issues related sub package
* Merge
* Fix test
* Fix test
* Fix test
* Fix test
* Rename some files
* Move access and repo permission to models/perm/access
* fix test
* Move some git related files into sub package models/git
* Fix build
* fix git test
* move lfs to sub package
* move more git related functions to models/git
* Move functions sequence
* Some improvements per @KN4CK3R and @delvh
* Move some repository related code into sub package
* Move more repository functions out of models
* Fix lint
* Some performance optimization for webhooks and others
* some refactors
* Fix lint
* Fix
* Update modules/repository/delete.go
Co-authored-by: delvh <dev.lh@web.de>
* Fix test
* Merge
* Fix test
* Fix test
* Fix test
* Fix test
Co-authored-by: delvh <dev.lh@web.de>
Fixes #12338
This allows us to talk to the API with our SSH certificate (and/or ssh-agent) without needing to fetch an API key or token.
It will just automatically work once users have added their SSH principal in Gitea.
This needs client code in tea
Update: also support normal pubkeys
ref: https://tools.ietf.org/html/draft-cavage-http-signatures
Co-authored-by: Lunny Xiao <xiaolunwen@gmail.com>
Co-authored-by: zeripath <art27@cantab.net>
Signed-off-by: Andrew Thornton <art27@cantab.net>
* Add LFS API
* Update routers/api/v1/repo/file.go
Co-authored-by: Gusted <williamzijl7@hotmail.com>
* Apply suggestions
* Apply suggestions
* Update routers/api/v1/repo/file.go
Co-authored-by: Gusted <williamzijl7@hotmail.com>
* Report errors
* Add test
* Use own repo for test
* Use different repo name
* Improve handling
* Slight restructures
1. Avoid reading the blob data multiple times
2. Ensure that caching is only checked when about to serve the blob/lfs
3. Avoid nesting by returning early
4. Make log message a bit more clear
5. Ensure that the dataRc is closed by defer when passed to ServeData
Signed-off-by: Andrew Thornton <art27@cantab.net>
Co-authored-by: Gusted <williamzijl7@hotmail.com>
Co-authored-by: Andrew Thornton <art27@cantab.net>
Co-authored-by: Lunny Xiao <xiaolunwen@gmail.com>
Although the use of LastModified dates for caching of git objects should be
discouraged (as it is not native to git - and there are a LOT of ways this
could be incorrect) - LastModified dates can be a helpful somewhat more human
way of caching for simple cases.
This PR adds this header and handles the If-Modified-Since header to the /raw/
routes.
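The conditional-request handling itself follows the standard pattern (simplified sketch):
```go
// Simplified sketch: reply 304 when the client's copy is still current,
// otherwise set Last-Modified and let the caller serve the blob as usual.
func checkLastModified(w http.ResponseWriter, r *http.Request, lastModified time.Time) (done bool) {
	if ims := r.Header.Get("If-Modified-Since"); ims != "" {
		if t, err := http.ParseTime(ims); err == nil && !lastModified.Truncate(time.Second).After(t) {
			w.WriteHeader(http.StatusNotModified)
			return true // nothing more to send
		}
	}
	w.Header().Set("Last-Modified", lastModified.UTC().Format(http.TimeFormat))
	return false
}
```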
Fix #18354
Signed-off-by: Andrew Thornton <art27@cantab.net>
Co-authored-by: 6543 <6543@obermui.de>
* Fix indention
Signed-off-by: kolaente <k@knt.li>
* Add option to merge a pr right now without waiting for the checks to succeed
Signed-off-by: kolaente <k@knt.li>
* Fix lint
Signed-off-by: kolaente <k@knt.li>
* Add scheduled pr merge to tables used for testing
Signed-off-by: kolaente <k@knt.li>
* Add status param to make GetPullRequestByHeadBranch reusable
Signed-off-by: kolaente <k@knt.li>
* Move "Merge now" to a seperate button to make the ui clearer
Signed-off-by: kolaente <k@knt.li>
* Update models/scheduled_pull_request_merge.go
Co-authored-by: 赵智超 <1012112796@qq.com>
* Update web_src/js/index.js
Co-authored-by: 赵智超 <1012112796@qq.com>
* Update web_src/js/index.js
Co-authored-by: 赵智超 <1012112796@qq.com>
* Re-add migration after merge
* Fix frontend lint
* Fix version compare
* Add vendored dependencies
* Add basic tests
* Make sure the api route is capable of scheduling PRs for merging
* Fix comparing version
* make vendor
* adopt refactor
* apply suggestion: User -> Doer
* init var once
* Fix Test
* Update templates/repo/issue/view_content/comments.tmpl
* adopt
* nits
* next
* code format
* lint
* use same name schema; rm CreateUnScheduledPRToAutoMergeComment
* API: can not create schedule twice
* Add TestGetBranchNamesForSha
* nits
* new go routine for each pull to merge
* Update models/pull.go
Co-authored-by: a1012112796 <1012112796@qq.com>
* Update models/scheduled_pull_request_merge.go
Co-authored-by: a1012112796 <1012112796@qq.com>
* fix & add renaming sugestions
* Update services/automerge/pull_auto_merge.go
Co-authored-by: a1012112796 <1012112796@qq.com>
* fix conflict relicts
* apply latest refactors
* fix: migration after merge
* Update models/error.go
Co-authored-by: delvh <dev.lh@web.de>
* Update options/locale/locale_en-US.ini
Co-authored-by: delvh <dev.lh@web.de>
* Update options/locale/locale_en-US.ini
Co-authored-by: delvh <dev.lh@web.de>
* adapt latest refactors
* fix test
* use more context
* skip potential edgecases
* document func usage
* GetBranchNamesForSha() -> GetRefsBySha()
* start refactoring
* adjust to new changes
* nit
* docu nit
* the great check move
* move checks for branchprotection into own package
* resolve todo now ...
* move & rename
* unexport if possible
* fix
* check if merge is allowed before merge on scheduled pull
* debugg
* wording
* improve SetDefaults & nits
* NotAllowedToMerge -> DisallowedToMerge
* fix test
* merge files
* use package "errors"
* merge files
* add string names
* other implementation for gogit
* adapt refactor
* more context for models/pull.go
* GetUserRepoPermission use context
* more ctx
* use context for loading pull head/base-repo
* more ctx
* more ctx
* models.LoadIssueCtx()
* models.LoadIssueCtx()
* Handle pull_service.Merge in one DB transaction
* add TODOs
* next
* next
* next
* more ctx
* more ctx
* Start refactoring structure of old pull code ...
* move code into new packages
* shorter names ... and finish **restructure**
* Update models/branches.go
Co-authored-by: zeripath <art27@cantab.net>
* finish UpdateProtectBranch
* more and fix
* update datum
* template: use "svg" helper
* rename prQueue 2 prPatchCheckerQueue
* handle automerge in queue
* lock pull on git&db actions ...
* lock pull on git&db actions ...
* add TODO notes
* the regex
* transaction in tests
* GetRepositoryByIDCtx
* shorter table name and lint fix
* close transaction before notify
* Update models/pull.go
* next
* CheckPullMergable check all branch protections!
* Update routers/web/repo/pull.go
* CheckPullMergable check all branch protections!
* Revert "PullService lock via pullID (#19520)" (for now...)
This reverts commit 6cde7c9159a5ea75a10356feb7b8c7ad4c434a9a.
* Update services/pull/check.go
* Use for a repo action one database transaction
* Apply suggestions from code review
* Apply suggestions from code review
Co-authored-by: delvh <dev.lh@web.de>
* Update services/issue/status.go
Co-authored-by: delvh <dev.lh@web.de>
* Update services/issue/status.go
Co-authored-by: delvh <dev.lh@web.de>
* use db.WithTx()
* gofmt
* make pr.GetDefaultMergeMessage() context aware
* make MergePullRequestForm.SetDefaults context aware
* use db.WithTx()
* pull.SetMerged only with context
* fix deadlock in `test-sqlite\#TestAPIBranchProtection`
* dont forget templates
* db.WithTx allow to set the parentCtx
* handle db transaction in service packages but not router
* issue_service.ChangeStatus had just caused another deadlock :/
it has something to do with how the notification package is handled
* if we merge a pull in one database transaction, we get a lock, because merge invokes the internal API, which can't handle open DB sessions to the same repo
* adjust to current master
* Apply suggestions from code review
Co-authored-by: delvh <dev.lh@web.de>
* dont open db transaction in router
* make generate-swagger
* one _success less
* wording nit
* rm
* adapt
* remove not needed test files
* rm less diff & use attr in JS
* ...
* Update services/repository/files/commit.go
Co-authored-by: wxiaoguang <wxiaoguang@gmail.com>
* adjust db schema for PullAutoMerge
* skip broken pull refs
* more context in error messages
* remove webUI part for another pull
* remove more WebUI only parts
* API: add CancleAutoMergePR
* Apply suggestions from code review
Co-authored-by: wxiaoguang <wxiaoguang@gmail.com>
* fix lint
* Apply suggestions from code review
* cancle -> cancel
Co-authored-by: delvh <dev.lh@web.de>
* change queue identifier
* fix swagger
* prevent nil issue
* fix and dont drop error
* as per @zeripath
* Update integrations/git_test.go
Co-authored-by: delvh <dev.lh@web.de>
* Update integrations/git_test.go
Co-authored-by: delvh <dev.lh@web.de>
* more declarative integration tests (dedup code)
* use assert.False/True helper
Co-authored-by: 赵智超 <1012112796@qq.com>
Co-authored-by: 6543 <6543@obermui.de>
Co-authored-by: delvh <dev.lh@web.de>
Co-authored-by: zeripath <art27@cantab.net>
Co-authored-by: wxiaoguang <wxiaoguang@gmail.com>
* Apply DefaultUserIsRestricted in CreateUser
* Enforce system defaults in CreateUser
Allow for overwrites with CreateUserOverwriteOptions
* Fix compilation errors
* Add "restricted" option to create user command
* Add "restricted" option to create user admin api
* Respect default setting.Service.RegisterEmailConfirm and setting.Service.RegisterManualConfirm where needed
* Revert "Respect default setting.Service.RegisterEmailConfirm and setting.Service.RegisterManualConfirm where needed"
This reverts commit ee95d3e8dc9e9fff4fa66a5111e4d3930280e033.
Targeting #14936, #15332
Adds a collaborator permissions API endpoint according to the GitHub API: https://docs.github.com/en/rest/collaborators/collaborators#get-repository-permissions-for-a-user to retrieve a collaborator's permissions for a specific repository.
### Checks the repository permissions of a collaborator.
`GET` `/repos/{owner}/{repo}/collaborators/{collaborator}/permission`
Possible `permission` values are `admin`, `write`, `read`, `owner`, `none`.
```json
{
  "permission": "admin",
  "role_name": "admin",
  "user": {}
}
```
Where `permission` and `role_name` hold the same `permission` value and `user` is filled with the user API object. Only admins are allowed to use this API endpoint.
* Don't error when branch's commit doesn't exist
- If one of the branches no longer exists, don't throw an error; it's possible that the branch was destroyed during the process. Simply skip and disregard it.
- Resolves #19541
* Don't send empty objects
* Use more minimal approach
Adds a feature [like GitHub has](https://docs.github.com/en/pull-requests/collaborating-with-pull-requests/proposing-changes-to-your-work-with-pull-requests/creating-a-pull-request-from-a-fork) (step 7).
If you create a new PR from a forked repo, you can select (and change later, but only if you are the PR creator/poster) the "Allow edits from maintainers" option.
Then users with write access to the base branch get more permissions on this branch:
* use the update pull request button
* push directly from the command line (`git push`)
* edit/delete/upload files via web UI
* use related API endpoints
You can't merge PRs to this branch with this enabled, you'll need "full" code write permissions.
This feature has a pretty big impact on the permission system. I might have forgotten to change some things or might have missed security vulnerabilities. In that case, please leave a review or comment on this PR.
Closes #17728
Co-authored-by: 6543 <6543@obermui.de>
* Improve dashboard's repo list performance
- Avoid a lot of database lookups for all the repos by adding an
undocumented "minimal" mode for this specific task, which returns only
the data needed by this list and doesn't require any database lookups.
- Makes fetching these lists faster.
- Less CPU overhead when a user visits home page.
* Refactor javascript code + fix Fork icon
- Use async in the function so we can use `await`.
- Remove the `archivedFilter` check for the count, as it doesn't make sense to
show the count of repos when you can't even see them (as they are
filtered away).
* Add `count_only`
* Remove unnecessary code
* Improve comment
Co-authored-by: delvh <dev.lh@web.de>
* Update web_src/js/components/DashboardRepoList.js
Co-authored-by: delvh <dev.lh@web.de>
* Update web_src/js/components/DashboardRepoList.js
Co-authored-by: delvh <dev.lh@web.de>
* By default apply minimal mode
* Remove `minimal` parameter
* Refactor count header
* Simplify init
Co-authored-by: wxiaoguang <wxiaoguang@gmail.com>
Co-authored-by: delvh <dev.lh@web.de>
Co-authored-by: wxiaoguang <wxiaoguang@gmail.com>
Co-authored-by: zeripath <art27@cantab.net>
- Don't panic on `ErrEmailInvalid`; this was caused by trying to force the `ErrEmailCharIsNotSupported` interface, which panics.
- Resolves #19397
When a mirror repo's interval is updated via the UI, it is rescheduled with that interval;
however, the API does not do this. The API also lacks the enable_prune option.
This PR adds this functionality to the API Edit Repo endpoint.
Signed-off-by: Andrew Thornton <art27@cantab.net>
Reusing `/api/v1` from the Gitea UI pages has pros and cons.
Pros:
1) Less code duplication
Cons:
1) API/v1 has to support sharing sessions with page requests.
2) Whenever you change something about api/v1 or the pages, you have to consider the other as well.
This PR removes all dependencies on API/v1 from the UI pages.
Partially replaces #16052
* An attempt to sync a non-mirror repo must give 400 (Bad Request)
* add missing return statement
Co-authored-by: Lunny Xiao <xiaolunwen@gmail.com>
Co-authored-by: techknowlogick <techknowlogick@gitea.io>