#28361 introduced `syncBranchToDB` in `CreateNewBranchFromCommit`. This
PR reverts the change because it's unnecessary: every push is already
checked by `syncBranchToDB`.
This PR also adds a test to ensure the behavior is correct.
This is a regression from #28220.
`builder.Cond` does not add backquotes (`` ` ``) automatically, but the xorm
methods `Get`/`Find` do.
This PR also adds tests to prevent the method from being implemented
incorrectly. The tests are added in `integrations` to test every
database.
Introduce the new generic deletion methods
- `func DeleteByID[T any](ctx context.Context, id int64) (int64, error)`
- `func DeleteByIDs[T any](ctx context.Context, ids ...int64) error`
- `func Delete[T any](ctx context.Context, opts FindOptions) (int64,
error)`
So, we no longer need any specific deletion method and can just use
the generic ones instead.
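A minimal usage sketch of the new generics (the `Comment` model and the `FindCommentOptions` value here are illustrative placeholders, not names defined by this PR):
```go
// delete one row by primary key; returns the number of affected rows
affected, err := db.DeleteByID[Comment](ctx, commentID)

// delete several rows by their IDs
err = db.DeleteByIDs[Comment](ctx, 4, 5, 6)

// delete by arbitrary conditions expressed through a FindOptions value
affected, err = db.Delete[Comment](ctx, FindCommentOptions{IssueID: issueID})
```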
Replacement of #28450. Closes #28450.
---------
Co-authored-by: Lunny Xiao <xiaolunwen@gmail.com>
The CORS code has been unmaintained for a long time, and its behavior is not
correct.
This PR tries to improve it. The key points are written as comments in the
code. It also adds more tests.
Fix #28515, fix #27642, fix #17098
- Remove `ObjectFormatID`
- Remove function `ObjectFormatFromID`.
- Use `Sha1ObjectFormat` directly rather than a pointer, because it's an
empty struct.
- Store `ObjectFormatName` in `repository` struct
Refactor Hash interfaces and centralize hash function. This will allow
easier introduction of different hash function later on.
This forms the "no-op" part of the SHA256 enablement patch.
Fix #28056
This PR checks whether the repo has zero branches when a branch is pushed.
If so, it means this repository hasn't been synced yet.
The cause is that after a user upgrades from v1.20 to v1.21, they may push
branches without ever visiting the repository in the user interface. All
repository routers check whether a branch sync is necessary, but pushes had
no such check.
Every repository has two states: synced or not synced. If a repository has
zero branches in the database, it is assumed to be in the non-synced state;
otherwise it is synced. So if we think it's synced, we just need to update
the branch or insert the new branch; otherwise we do a full sync. That way,
almost no extra load is added per push, and this performs better than the
alternative approach.
For the implementation, we first try to update the branch. If the update
succeeds with affected records > 0, we are done, because that means the
branch already exists in the database. If no record is affected, the branch
does not exist in the database, which leaves two possibilities: either this
is a new branch, in which case we just insert the record, or the branches
haven't been synced yet, in which case we sync all the branches into the
database.
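Roughly, the per-push logic reads like this sketch (the helper names here are illustrative, not the actual functions in the PR):
```go
// handle one pushed branch update
affected, err := updateBranch(ctx, repoID, branchName, newCommitID)
if err != nil {
	return err
}
if affected > 0 {
	return nil // the branch row existed and was updated: nothing else to do
}
// no row was updated: either this is a brand-new branch on a synced repo...
if repoHasBranches(ctx, repoID) {
	return insertBranch(ctx, repoID, branchName, newCommitID)
}
// ...or the repo has zero branches in the database, so run a full sync
return syncAllBranches(ctx, repoID)
```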
The function `GetByBean` has an obvious defect: when fields are empty
values, they are ignored. Users can then get a wrong result, which could
possibly be turned into a security problem.
To avoid that possibility, this PR removes the function `GetByBean` and all
references to it.
Some new generic functions have been introduced to replace it.
The recommended usage is shown below.
```go
// query an object by its ID
obj, err := db.GetByID[Object](ctx, id)
// query by other conditions
obj, err := db.Get[Object](ctx, builder.Eq{"a": a, "b": b})
```
System users (Ghost, ActionsUser, etc.) have a negative ID and may be the
author of a comment, either because it was created by a now-deleted user
or via an action using a transient token.
The GetPossibleUserByID function has special cases related to system
users and will not fail if given a negative id.
Refs: https://codeberg.org/forgejo/forgejo/issues/1425
(cherry picked from commit 6a2d2fa24390116d31ae2507c0a93d423f690b7b)
Fixes #27819
We have support for two-factor logins with the normal web login and with
basic auth. For basic auth the two-factor check was implemented in three
different places, and you needed to know that this check is necessary. This
PR moves the check into the basic auth itself.
Fixes #27598
In #27080, the logic for the tokens endpoints was updated to allow
admins to create and view tokens in other accounts. However, the same
functionality was not added to the DELETE endpoint. This PR makes the
DELETE endpoint function the same as the other token endpoints and adds unit tests.
Closes #27455
> The mechanism responsible for long-term authentication (the 'remember
me' cookie) uses a weak construction technique. It will hash the user's
hashed password and the rands value; it will then call the secure cookie
code, which will encrypt the user's name with the computed hash. If one
were able to dump the database, they could extract those two values to
rebuild that cookie and impersonate a user. That vulnerability exists
from the date the dump was obtained until a user changed their password.
>
> To fix this security issue, the cookie could be created and verified
using a different technique such as the one explained at
https://paragonie.com/blog/2015/04/secure-authentication-php-with-long-term-persistence#secure-remember-me-cookies.
The PR removes the now obsolete setting `COOKIE_USERNAME`.
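For context, the linked article describes a selector/validator construction; a rough sketch of that pattern (every helper below is a placeholder, and this is not necessarily the exact code the PR adds):
```go
// Issue a remember-me cookie: the database stores only a hash of the secret part.
selector := randomHex(12)  // public lookup key, stored as-is
validator := randomHex(32) // secret; only its SHA-256 hash is persisted
storeToken(userID, selector, sha256Hex(validator), time.Now().Add(30*24*time.Hour))
setCookie(w, "remember", selector+":"+validator)

// Verify a later request.
sel, val, _ := strings.Cut(readCookie(r, "remember"), ":")
rec := lookupToken(sel)
if rec != nil && !rec.Expired() &&
	hmac.Equal([]byte(sha256Hex(val)), []byte(rec.HashedValidator)) {
	loginUser(rec.UserID)
}
```
A database dump then only reveals hashed validators, so an attacker cannot rebuild the cookie from it, which addresses the weakness described above.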
assert.Fail() lets the code continue executing, while assert.FailNow() does
not. Those uses of assert.Fail() should exit immediately.
PS: perhaps it's a good idea to use
[require](https://pkg.go.dev/github.com/stretchr/testify/require)
in some places, because the assert package's default behavior does not stop
the test when an error occurs, which makes it difficult to find the root
cause of an error.
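To illustrate the difference with the real testify API (the `doWork` call is a placeholder):
```go
import (
	"testing"

	"github.com/stretchr/testify/assert"
	"github.com/stretchr/testify/require"
)

func TestExample(t *testing.T) {
	err := doWork() // placeholder for the code under test

	assert.NoError(t, err)  // marks the test as failed but keeps executing
	require.NoError(t, err) // marks the test as failed and stops it immediately, like assert.FailNow()
}
```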
- Currently in the cron tasks, the 'Previous Time' only displays the
previous time of when the cron library executes the function, but not
any of the manual executions of the task.
- Store the last run's time in memory in the Task struct and use it when
that time is later than the time at which the cron library last executed
this task.
- This ensures that if an instance admin manually starts a task, there's
feedback that this task is or has been run, because the task might finish so
quickly that the status icon has already changed to a checkmark.
- Tasks that are executed at startup now reflect this as well, as the
time of the execution of that task on startup is now being shown as
'Previous Time'.
- Added integration tests for the API part, which is easier to test
because querying the HTML table of cron tasks is non-trivial.
- Resolves https://codeberg.org/forgejo/forgejo/issues/949
(cherry picked from commit fd34fdac1408ece6b7d9fe6a76501ed9a45d06fa)
---------
Co-authored-by: Gusted <postmaster@gusted.xyz>
Co-authored-by: KN4CK3R <admin@oldschoolhack.me>
Co-authored-by: silverwind <me@silverwind.io>
- MySQL 5.7 support and testing is dropped
- MySQL tests now execute against 8.1, up from 5.7 and 8.0
- PostgreSQL 10 and 11 support is dropped
- PostgreSQL tests now execute against 16, up from 15
- MSSQL 2008 support is dropped
- MSSQL tests now run against a locked 2022 version
Fixes: https://github.com/go-gitea/gitea/issues/25657
Ref: https://endoflife.date/mysql
Ref: https://endoflife.date/postgresql
Ref: https://endoflife.date/mssqlserver
## ⚠️ BREAKING ⚠️
Support for MySQL 5.7, PostgreSQL 10 and 11, and MSSQL 2008 is dropped.
You are encouraged to upgrade to supported versions.
---------
Co-authored-by: techknowlogick <techknowlogick@gitea.com>
Part of #27065
This PR touches functions used in templates. As templates are not statically
typed, errors are harder to find, but I hope I caught them all. I think
some tests from other people would not hurt.
Blank issues should be enabled if they are not explicitly disabled through
the `blank_issues_enabled` field of the issue config. The implementation
currently has a bug: if you create an issue config file with only
`contact_links` and without a `blank_issues_enabled` field,
`blank_issues_enabled` is set to false by default.
The fix is only one line, but I decided to also improve the tests to
make sure there are no other problems with the implementation.
This is a bugfix, so it should be backported to 1.20.
Part of #27065
This reduces the usage of `db.DefaultContext`. I think I've got enough
files for the first PR. When this is merged, I will continue working on
this.
Considering how many files this PR affects, I hope it won't take too long
to merge, so I don't end up in merge conflict hell.
---------
Co-authored-by: wxiaoguang <wxiaoguang@gmail.com>
Currently, artifacts have no expiration or automatic cleanup
mechanism, and this feature needs to be added. It contains the following
key points:
- [x] add global artifact retention days option in config file. Default
value is 90 days.
- [x] add cron task to clean up expired artifacts. It should run once a
day.
- [x] support custom retention period from `retention-days: 5` in
`upload-artifact@v3`.
- [x] artifacts link in actions view should be non-clickable text when
expired.
They currently throw an Internal Server Error when you use them without a
token. Now they correctly return a `token is required` error.
This is not a security issue: if you use these endpoints with a token that
doesn't have the correct permission, you get the correct error, and that
behavior is not affected by this PR.
1. The old `prepareQueryArg` did double-unescaping of the form value (see
the illustration below).
2. By the way, remove the unnecessary `ctx.Flash = ...` in
`MockContext`.
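A standard-library illustration (`net/url`) of why double-unescaping corrupts values; this shows the pitfall, not the exact Gitea code:
```go
v := "a%252Fb"                      // the client sent a literal "a%2Fb"
once, _ := url.QueryUnescape(v)     // "a%2Fb" — the correct result
twice, _ := url.QueryUnescape(once) // "a/b"   — a second pass silently changes the value
fmt.Println(once, twice)
```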
Co-authored-by: Giteabot <teabot@gitea.io>
Just like `models/unittest`, the testing helper functions should be in a
separate package: `contexttest`
And complete the TODO:
> // TODO: move this function to other packages, because it depends on
"models" package
This PR implements a proposal to clean up the admin users table by
moving some information out to a separate user details page (which also
displays some additional information).
Other changes:
- move edit user page from `/admin/users/{id}` to
`/admin/users/{id}/edit` -> `/admin/users/{id}` now shows the user
details page
- show if user is instance administrator as a label instead of a
separate column
- separate explore users template into a page- and a shared one, to make
it possible to use it on the user details page
- fix issue where there was no margin between alert message and
following content on admin pages
<details>
<summary>Screenshots</summary>
![grafik](https://github.com/go-gitea/gitea/assets/47871822/1ad57ac9-f20a-45a4-8477-ffe572a41e9e)
![grafik](https://github.com/go-gitea/gitea/assets/47871822/25786ecd-cb9d-4c92-90f4-e7f4292c073b)
</details>
Partially resolves #25939
---------
Co-authored-by: Giteabot <teabot@gitea.io>
- Resolves https://codeberg.org/forgejo/forgejo/issues/580
- Return an `upload_field` in any release API response, which points to
the API URL for uploading new assets.
- Adds a unit test.
- Adds integration testing to verify the URL is returned correctly and that
the upload endpoint actually works.
---------
Co-authored-by: Gusted <postmaster@gusted.xyz>
Fixes: #26333.
Previously, this endpoint only updated the `StatusCheckContexts` field
when `EnableStatusCheck==true`, which made it impossible to clear the
array otherwise.
This patch uses slice `nil`-ness to decide whether to update the list of
checks. The field is ignored when either the client explicitly passes in
a null, or just omits the field from the json ([which causes
`json.Unmarshal` to leave the struct field
unchanged](https://go.dev/play/p/Z2XHOILuB1Q)). I think this is a better
measure of intent than whether the `EnableStatusCheck` flag was set,
because it matches the semantics of other field types.
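The `encoding/json` behavior relied on here, in miniature (the struct is a simplified stand-in for the real edit-option type):
```go
type editOption struct {
	StatusCheckContexts []string `json:"status_check_contexts"`
}

var omitted, explicitNull, emptyList editOption
_ = json.Unmarshal([]byte(`{}`), &omitted)                                   // field absent
_ = json.Unmarshal([]byte(`{"status_check_contexts": null}`), &explicitNull) // explicit null
_ = json.Unmarshal([]byte(`{"status_check_contexts": []}`), &emptyList)      // empty list

fmt.Println(omitted.StatusCheckContexts == nil)      // true  -> leave the checks unchanged
fmt.Println(explicitNull.StatusCheckContexts == nil) // true  -> leave the checks unchanged
fmt.Println(emptyList.StatusCheckContexts == nil)    // false -> clear the configured checks
```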
Also adds a test case. I noticed that [`testAPIEditBranchProtection`
only checks the branch
name](c1c83dbaec/tests/integration/api_branch_test.go (L68))
and no other fields, so I added some extra `GET` calls and specific
checks to make sure the fields are changing properly.
I added those checks to the existing integration test; is that the right
place for them?
Fixes #25564, fixes #23191
- The API v2 search endpoint should return only the latest version matching
the query
- The API v3 search endpoint should return `take` packages, not package
versions
I kept sending pull requests that consisted of one-line changes. It's
time to
settle this once and for all. (Maybe.)
- Explain Gitea behavior and the consequences of each
setting better, so that the user does not have to consult
the docs.
- Do not use different spellings of identical terms
interchangeably, e.g. `e-mail` and `email`.
- Use more conventional terms to describe the same things,
e.g. `Confirm Password` instead of `Re-Type Password`.
- Introduces additional clarification for Mirror Settings
- Small adjustments in test
- This is a cry for help.
- Grammar and spelling consistencies for en-US locale
(e.g. cancelled -> canceled)
- Introduce tooltip improvements.
---------
Co-authored-by: delvh <dev.lh@web.de>
Co-authored-by: wxiaoguang <wxiaoguang@gmail.com>
Co-authored-by: Giteabot <teabot@gitea.io>
- The permalink and 'Reference in New Issue' URL of a renderable file
(those where you can see the source and a rendered version of it, such
as markdown) don't contain `?display=source`. This leads to the issue
that the URL has no effect, as by default the rendered version is shown
and thus not the source.
- Add `?display=source` to the permalink URL and to 'Reference in New
Issue' if it's a renderable file.
- Add integration testing.
Refs: https://codeberg.org/forgejo/forgejo/pulls/1088
Co-authored-by: Gusted <postmaster@gusted.xyz>
Co-authored-by: Giteabot <teabot@gitea.io>
Until now, expired package data gets deleted daily by a cronjob. The
admin page shows the size of all packages and the size of unreferenced
data. Users (#25035, #20631) expect this data to be deleted if
they run the cronjob from the admin page, but the job only deletes data
older than 24h.
This PR adds a new button which deletes all expired data.
![grafik](https://github.com/go-gitea/gitea/assets/1666336/b3e35d73-9496-4fa7-a20c-e5d30b1f6850)
---------
Co-authored-by: silverwind <me@silverwind.io>
Follow #25229
Copy from
https://github.com/go-gitea/gitea/pull/26290#issuecomment-1663135186
The bug is that we cannot get changed files for the
`pull_request_target` event. This event runs in the context of the base
branch, so we won't get any changes if we call
`GetFilesChangedSinceCommit` with `PullRequest.Base.Ref`.
- `setting.UI.Notification.EventSourceUpdateTime` is 10 seconds by default,
which adds a 10-second delay before the test succeeds.
- Lower the interval to reduce it to at most a 3-second delay (the code
only sends events when they are at least 2 seconds old).
(cherry picked from commit 3adb9ae6009ff3ddebaed4875e086343f668ef7b)
Refs: https://codeberg.org/forgejo/forgejo/pulls/1166
Co-authored-by: Gusted <postmaster@gusted.xyz>
Co-authored-by: Giteabot <teabot@gitea.io>
Not too important, but I think that it'd be a pretty neat touch.
Also fixes some layout bugs introduced by a previous PR.
---------
Co-authored-by: Gusted <postmaster@gusted.xyz>
Co-authored-by: Caesar Schinas <caesar@caesarschinas.com>
Co-authored-by: wxiaoguang <wxiaoguang@gmail.com>
Fix #24662.
Replace #24822 and #25708 (although it has been merged)
## Background
In the past, Gitea supported issue searching with a keyword and
conditions in a less efficient way. It worked by searching for issues
with the keyword and obtaining a limited set of IDs (as it is too heavy to get them all)
on the indexer (bleve/elasticsearch/meilisearch), and then querying with
conditions on the database to find a subset of the found IDs. This is
why the results could be incomplete.
To solve this issue, we need to store all fields that could be used as
conditions in the indexer and support both keyword and additional
conditions when searching with the indexer.
## Major changes
- Redefine `IndexerData` to include all fields that could be used as
filter conditions.
- Refactor `Search(ctx context.Context, kw string, repoIDs []int64,
limit, start int, state string)` to `Search(ctx context.Context, options
*SearchOptions)`, so it supports more conditions now (see the sketch after
this list).
- Change the data type stored in `issueIndexerQueue`. Use
`IndexerMetadata` instead of `IndexerData` in case the data has been
updated while it is in the queue. This also reduces the storage size of
the queue.
- Enhance searching with Bleve/Elasticsearch/Meilisearch, make them
fully support `SearchOptions`. Also, update the data versions.
- Keep most logic of the database indexer, but remove
`issues.SearchIssueIDsByKeyword` in `models` to avoid confusion about where
the entry point for searching issues is.
- Start a Meilisearch instance to test it in unit tests.
- Add unit tests with almost full coverage to test
Bleve/Elasticsearch/Meilisearch indexer.
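Conceptually, a caller now passes the keyword and every filter through one options struct; roughly like the sketch below (the field names are illustrative assumptions, not the exact `SearchOptions` definition):
```go
// old call shape: Search(ctx, "panic", []int64{1, 2}, 50, 0, "open")
// new call shape: one struct carries the keyword and all filter conditions
opts := &SearchOptions{
	Keyword: "panic",
	RepoIDs: []int64{1, 2},
	// ...plus whatever filter fields the indexer supports (state, labels, assignee, pagination, ...)
}
result, err := indexer.Search(ctx, opts)
```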
---------
Co-authored-by: Lunny Xiao <xiaolunwen@gmail.com>
The API should only return the real email of a user if the caller is
logged in. The check to do this didn't work. This PR fixes it. This is not
really a security issue, but it can lead to spam.
---------
Co-authored-by: silverwind <me@silverwind.io>
Fix #25934
Add `ignoreGlobal` parameter to `reqUnitAccess` and only check global
disabled units when `ignoreGlobal` is true. So the org-level projects
and user-level projects won't be affected by global disabled
`repo.projects` unit.
The setting `MAILER_TYPE` is deprecated.
According to the config cheat sheet, it should be `PROTOCOL`.
---------
Co-authored-by: Lunny Xiao <xiaolunwen@gmail.com>
The version listed in rpm repodata should only contain the rpm version
(1.0.0) and not the combination of version and release (1.0.0-2). We
correct this behaviour in primary.xml.gz, filelists.xml.gz and
others.xml.gz.
Signed-off-by: Peter Verraedt <peter@verraedt.be>
Replace #25892
Close #21942
Close #25464
Major changes:
1. Serve "robots.txt" and ".well-known/security.txt" in the "public"
custom path
* All files in "public/.well-known" can be served, just like
"public/assets"
3. Add a test for ".well-known/security.txt"
4. Simplify the "FileHandlerFunc" logic, now the paths are consistent so
the code can be simpler
5. Add CORS header for ".well-known" endpoints
6. Add logs to tell users they should move some of their legacy custom
public files
```
2023/07/19 13:00:37 cmd/web.go:178:serveInstalled() [E] Found legacy public asset "img" in CustomPath. Please move it to /work/gitea/custom/public/assets/img
2023/07/19 13:00:37 cmd/web.go:182:serveInstalled() [E] Found legacy public asset "robots.txt" in CustomPath. Please move it to /work/gitea/custom/public/robots.txt
```
This PR is not breaking.
---------
Co-authored-by: silverwind <me@silverwind.io>
Co-authored-by: Giteabot <teabot@gitea.io>
Replace #10912
And there are many new tests to cover the CLI behavior
There were some concerns about the "option order in hook scripts"
(https://github.com/go-gitea/gitea/pull/10912#issuecomment-1137543314);
it's not a problem now, because the hook script uses the `/gitea hook
--config=/app.ini pre-receive` format. The "config" option is a global
option, so it can appear anywhere.
----
## ⚠️ BREAKING ⚠️
This PR does its best to avoid breaking anything. The major changes are:
* `gitea` itself won't accept web's options: `--install-port` / `--pid`
/ `--port` / `--quiet` / `--verbose` .... They are `web` sub-command's
options.
* Use `./gitea web --pid ....` instead
* `./gitea` can still run the `web` sub-command as shorthand, with
default options
* The sub-command's options must follow the sub-command
* Before: `./gitea --sub-opt subcmd` might be equal to `./gitea subcmd
--sub-opt` (well, might not ...)
* After: only `./gitea subcmd --sub-opt` could be used
* The global options like `--config` are not affected
Fix #25776. Close #25826.
In the discussion of #25776, @wolfogre's suggestion was to remove the
commit statuses `running` and `warning` to keep them consistent with
GitHub.
references:
-
https://docs.github.com/en/rest/commits/statuses?apiVersion=2022-11-28#about-commit-statuses
## ⚠️ BREAKING ⚠️
So the commit status of Gitea will be consistent with GitHub, only
`pending`, `success`, `error` and `failure`, while `warning` and
`running` are not supported anymore.
---------
Co-authored-by: Jason Song <i@wolfogre.com>
The current actions artifacts implementation only supports single-file
artifacts. To support uploading multiple files, it needs to:
- save each file to its own db record with the same run-id, the same
artifact-name and a proper artifact-path
- change the artifact uploading url to not include an artifact-id, since
multiple files create multiple artifact-ids
- support `path` in the download-artifact action; the artifact should
download to `{path}/{artifact-path}`
- in the repo action view, provide a zip download link in the artifacts list
on the summary page, no matter whether the artifact contains a single file
or multiple files
Before: the concept "Content string" is used everywhere. It has some
problems:
1. Sometimes it means "base64 encoded content", sometimes it means "raw
binary content"
2. It doesn't work with large files, eg: uploading a 1G LFS file would
make the Gitea process OOM
This PR does the refactoring: use "ContentReader" / "ContentBase64"
instead of "Content"
This PR is not breaking because the key in API JSON is still "content":
`` ContentBase64 string `json:"content"` ``
We refactored the token-parsing part of `userIDFromToken` into a new
function `parseToken`. `parseToken` returns the string `token` from the
request and a boolean `ok` representing whether the token exists or
not. So we can distinguish between token non-existence and token
inconsistency in the `verify` function, thus solving the problem of no
proper error message when the token is inconsistent.
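A simplified sketch of the split (the lookup helper and the error value are placeholders; the real Gitea code differs in detail):
```go
// parseToken extracts a token credential from the request;
// ok reports whether any token was supplied at all.
func parseToken(req *http.Request) (token string, ok bool) {
	if auth := req.Header.Get("Authorization"); auth != "" {
		if fields := strings.Fields(auth); len(fields) == 2 &&
			(fields[0] == "token" || strings.EqualFold(fields[0], "bearer")) {
			return fields[1], true
		}
	}
	if tok := req.URL.Query().Get("token"); tok != "" {
		return tok, true
	}
	return "", false
}

// The verifier can now tell "no token given" apart from "token given but invalid".
func userIDFromToken(req *http.Request) (int64, error) {
	token, ok := parseToken(req)
	if !ok {
		return 0, nil // no credential at all: let other auth methods try
	}
	uid := lookupUserIDByToken(token) // placeholder lookup
	if uid == 0 {
		return 0, errInvalidToken // credential present but wrong: report it clearly
	}
	return uid, nil
}
```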
close #24439
related #22119
---------
Co-authored-by: Jason Song <i@wolfogre.com>
Co-authored-by: Giteabot <teabot@gitea.io>
Fixes (?) #25538
Fixes https://codeberg.org/forgejo/forgejo/issues/972
Regression of #23879.
#23879 introduced a change which prevents read access to packages if a
user is not a member of an organization.
That PR also contained a change which disallows package access if the
team unit is configured with "no access" for packages. I don't think
this change makes sense (at the moment). It may be relevant for private
orgs. But for public or limited orgs that's useless because an
unauthorized user would have more access rights than the team member.
This PR restores the old behaviour "If a user has read access for an
owner, they can read packages".
---------
Co-authored-by: Giteabot <teabot@gitea.io>
related #16865
This PR adds an accessibility check before mounting container blobs.
---------
Co-authored-by: techknowlogick <techknowlogick@gitea.io>
Co-authored-by: silverwind <me@silverwind.io>
This PR displays a pull request creation hint on the repository home
page when there are newly created branches with no pull request. Only
branches updated within the last 6 hours, and at most 2 of them, are shown.
Inspired by #14003
Replace #14003. Resolves #311, resolves #13196, resolves #23743.
co-authored by @kolaente
Follow #25229
At present, when the trigger event is `pull_request_target`, the `ref`
and `sha` of `ActionRun` are set according to the base branch of the
pull request. This makes it impossible for us to find the head branch of
the `ActionRun` directly. In this PR, the `ref` and `sha` will always be
set to the head branch and they will be changed to the base branch when
generating the task context.
Fixes #24723
Direct serving of content (aka HTTP redirect) is not mentioned in any of
the package registry specs, but lots of official registries do it, so it
should be supported by the usual clients.
Fix #25558
Extract from #22743
This PR adds a repository check when creating/deleting branches via the
API. Mirror repositories and archived repositories cannot do that.
This adds an API for uploading and deleting avatars for users, repos
and organisations. I'm not sure if this should also be added to the
admin API.
Resolves #25344
---------
Co-authored-by: silverwind <me@silverwind.io>
Co-authored-by: Giteabot <teabot@gitea.io>
Related #14180
Related #25233
Related #22639. Close #19786.
Related #12763
This PR changes all branch retrieval from reading git data to reading the
database, to reduce git read operations.
- [x] Sync git branch information into the database when git data is pushed
- [x] Create a new table `Branch`, merge some columns of `DeletedBranch`
into the `Branch` table and drop the table `DeletedBranch`.
- [x] Read the `Branch` table when visiting the `code` -> `branch` page
- [x] Read the `Branch` table when listing branch names in the `code` page dropdown
- [x] Read the `Branch` table when listing branches on the git ref compare page
- [x] Provide a button in the admin page to manually sync all branches.
- [x] Sync branches when visiting pages with a branch list if the repository
is not empty but the database branches are empty
- [x] Use `commit_time desc` as the default FindBranch order-by to stay
consistent with before; deleted branches will always be at the end.
---------
Co-authored-by: Jason Song <i@wolfogre.com>
Fix #25088
This PR adds the support for
[`pull_request_target`](https://docs.github.com/en/actions/using-workflows/events-that-trigger-workflows#pull_request_target)
workflow trigger. `pull_request_target` is similar to `pull_request`,
but the workflow triggered by the `pull_request_target` event runs in
the context of the base branch of the pull request rather than the head
branch. Since the workflow from the base is considered trusted, it can
access the secrets and doesn't need approvals to run.
# The problem
There were many "path tricks":
* By default, Gitea uses its program directory as its work path
* Gitea tries to use the "work path" to guess its "custom path" and
"custom conf (app.ini)"
* Users might want to use other directories as work path
* The non-default work path should be passed to Gitea by GITEA_WORK_DIR
or "--work-path"
* But some Gitea processes are started without these values
* The "serv" process started by OpenSSH server
* The CLI sub-commands started by site admin
* The paths are guessed by SetCustomPathAndConf again and again
* The default values of "work path / custom path / custom conf" can be
changed when compiling
# The solution
* Use `InitWorkPathAndCommonConfig` to handle these path tricks, and use
test code to cover its behaviors.
* When Gitea's web server runs, write the WORK_PATH to "app.ini". This
value must be the most correct one, because if it is not right, users will
find that the web UI doesn't work and then they should be able to fix it.
* Then all other sub-commands can use the WORK_PATH in app.ini to
initialize their paths.
* By the way, when Gitea starts for the git protocol, it shouldn't output
any log, otherwise the git protocol gets broken and the client blocks
forever.
The "work path" priority is: WORK_PATH in app.ini > cmd arg --work-path
> env var GITEA_WORK_DIR > builtin default
The "app.ini" searching order is: cmd arg --config > cmd arg "work path
/ custom path" > env var "work path / custom path" > builtin default
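The priority chain reads as a small decision function (a sketch of the stated order, not the literal `InitWorkPathAndCommonConfig` code):
```go
func resolveWorkPath(appIniWorkPath, cliWorkPath, envWorkPath, builtinDefault string) string {
	switch {
	case appIniWorkPath != "": // WORK_PATH written into app.ini wins
		return appIniWorkPath
	case cliWorkPath != "": // then the --work-path command-line argument
		return cliWorkPath
	case envWorkPath != "": // then the GITEA_WORK_DIR environment variable
		return envWorkPath
	default: // finally the value baked in at compile time
		return builtinDefault
	}
}
```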
## ⚠️ BREAKING
If your instance's "work path / custom path / custom conf" doesn't meet
the requirements (eg: work path must be absolute), Gitea will report a
fatal error and exit. You need to set these values according to the
error log.
----
Close #24818, close #24222, close #21606, close #21498, close #25107, close #24981
Maybe close #24503
Replace #23301
Replace #22754
And maybe more
### Summary
Extend the template variable substitution to replace file paths. This
can be helpful for setting up log files & directories that should match
the repository name.
### PR Changes
- Move files matching glob pattern when setting up repos from template
- For security, added ~escaping~ sanitization for cross-platform support
and to prevent directory traversal (thanks @silverwind for the
reference)
- Added unit testing for escaping function
- Fixed the integration tests for repo template generation by passing
the repo_template_id
- Updated the integration testfiles to add some variable substitution &
assert the outputs
I had to fix the existing repo template integration test and extend it
to add a check for variable substitutions.
Example:
![image](https://github.com/go-gitea/gitea/assets/12700993/621feb09-0ef3-460e-afa8-da74cd84fa4e)
Fix #21072
![image](https://github.com/go-gitea/gitea/assets/15528715/96b30beb-7f88-4a60-baae-2e5ad8049555)
The Username Attribute is not a required item when creating an
authentication source. If the Username Attribute is empty, the username
value of an LDAP user cannot be read, so all users from LDAP will be marked
as inactive by mistake when synchronizing external users.
This PR improves the sync logic: if the username is empty, the email address
will be used to find the user.
1. The "web" package shouldn't depends on "modules/context" package,
instead, let each "web context" register themselves to the "web"
package.
2. The old Init/Free doesn't make sense, so simplify it
* The ctx in "Init(ctx)" is never used, and shouldn't be used that way
* The "Free" is never called and shouldn't be called because the SSPI
instance is shared
---------
Co-authored-by: Giteabot <teabot@gitea.io>
Follow up #22405. Fix #20703.
This PR rewrites the storage configuration read sequence with some breaking
changes and tests. It is stricter than before and also fixes some
inheritance problems.
- Move storage's MinioConfig struct into setting, so after the
configuration is loaded the values are stored in the struct rather than
remaining in some section.
- All storage configurations should be stored in one section;
configuration items cannot be overridden by multiple sections. The
priority of configuration is `[attachment]` > `[storage.attachments]` |
`[storage.customized]` > `[storage]` > `default`
- For extra override configuration items, currently `SERVE_DIRECT`,
`MINIO_BASE_PATH` and `MINIO_BUCKET`, which could be configured in another
section, the priority of the override configuration is `[attachment]` >
`[storage.attachments]` > `default`.
- Add more tests for storages configurations.
- Update the storage documentations.
---------
Co-authored-by: wxiaoguang <wxiaoguang@gmail.com>
Clarify the "link-action" behavior:
> // A "link-action" can post AJAX request to its "data-url"
> // Then the browser is redirect to: the "redirect" in response, or
"data-redirect" attribute, or current URL by reloading.
And enhance the "link-action" to support showing a modal dialog for
confirm. A similar general approach could also help PRs like
https://github.com/go-gitea/gitea/pull/22344#discussion_r1062883436
> // If the "link-action" has "data-modal-confirm(-html)" attribute, a
confirm modal dialog will be shown before taking action.
And a lot of duplicate code can be removed now. A good framework design
can help to avoid code copying&pasting.
---------
Co-authored-by: silverwind <me@silverwind.io>
## Changes
- Adds the following high level access scopes, each with `read` and
`write` levels:
- `activitypub`
- `admin` (hidden if user is not a site admin)
- `misc`
- `notification`
- `organization`
- `package`
- `issue`
- `repository`
- `user`
- Adds new middleware function `tokenRequiresScopes()` in addition to
`reqToken()`
- `tokenRequiresScopes()` is used for each high-level api section
- _if_ a scoped token is present, checks that the required scope is
included based on the section and HTTP method
- `reqToken()` is used for individual routes
- checks that required authentication is present (but does not check
scope levels, as this will already have been handled by
`tokenRequiresScopes()`)
- Adds migration to convert old scoped access tokens to the new set of
scopes
- Updates the user interface for scope selection
### User interface example
<img width="903" alt="Screen Shot 2023-05-31 at 1 56 55 PM"
src="https://github.com/go-gitea/gitea/assets/23248839/654766ec-2143-4f59-9037-3b51600e32f3">
<img width="917" alt="Screen Shot 2023-05-31 at 1 56 43 PM"
src="https://github.com/go-gitea/gitea/assets/23248839/1ad64081-012c-4a73-b393-66b30352654c">
## tokenRequiresScopes Design Decision
- `tokenRequiresScopes()` was added to more reliably cover api routes.
For an incoming request, this function uses the given scope category
(say `AccessTokenScopeCategoryOrganization`) and the HTTP method (say
`DELETE`) and verifies that any scoped tokens in use include
`delete:organization`.
- `reqToken()` is used to enforce auth for individual routes that
require it. If a scoped token is not present for a request,
`tokenRequiresScopes()` will not return an error
## TODO
- [x] Alphabetize scope categories
- [x] Change 'public repos only' to a radio button (private vs public).
Also expand this to organizations
- [X] Disable token creation if no scopes selected. Alternatively, show
warning
- [x] `reqToken()` is missing from many `POST/DELETE` routes in the api.
`tokenRequiresScopes()` only checks that a given token has the correct
scope, `reqToken()` must be used to check that a token (or some other
auth) is present.
- _This should be addressed in this PR_
- [x] The migration should be reviewed very carefully in order to
minimize access changes to existing user tokens.
- _This should be addressed in this PR_
- [x] Link to api to swagger documentation, clarify what
read/write/delete levels correspond to
- [x] Review cases where more than one scope is needed as this directly
deviates from the api definition.
- _This should be addressed in this PR_
- For example:
```go
m.Group("/users/{username}/orgs", func() {
m.Get("", reqToken(), org.ListUserOrgs)
m.Get("/{org}/permissions", reqToken(), org.GetUserOrgsPermissions)
}, tokenRequiresScopes(auth_model.AccessTokenScopeCategoryUser,
auth_model.AccessTokenScopeCategoryOrganization),
context_service.UserAssignmentAPI())
```
## Future improvements
- [ ] Add required scopes to swagger documentation
- [ ] Redesign `reqToken()` to be opt-out rather than opt-in
- [ ] Subdivide scopes like `repository`
- [ ] Once a token is created, if it has no scopes, we should display
text instead of an empty bullet point
- [ ] If the 'public repos only' option is selected, should read
categories be selected by default
Closes #24501, closes #24799
Co-authored-by: Jonathan Tran <jon@allspice.io>
Co-authored-by: Kyle D <kdumontnu@gmail.com>
Co-authored-by: silverwind <me@silverwind.io>
The PKCE flow according to [RFC
7636](https://datatracker.ietf.org/doc/html/rfc7636) allows for secure
authorization without the requirement to provide a client secret for the
OAuth app.
It has been implemented in Gitea since #5378 (v1.8.0), however without being
able to omit the client secret.
Since #21316 Gitea supports setting client type at OAuth app
registration.
As public clients are already forced to use PKCE since #21316, in this
PR the client secret check is being skipped if a public client is
detected. As Gitea seems to implement PKCE authorization correctly
according to the spec, this would allow for PKCE flow without providing
a client secret.
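For reference, the S256 values defined by RFC 7636 are derived like this, using only the Go standard library:
```go
import (
	"crypto/rand"
	"crypto/sha256"
	"encoding/base64"
)

// newPKCEPair returns a code_verifier and its S256 code_challenge per RFC 7636.
func newPKCEPair() (verifier, challenge string, err error) {
	buf := make([]byte, 32)
	if _, err = rand.Read(buf); err != nil {
		return "", "", err
	}
	verifier = base64.RawURLEncoding.EncodeToString(buf)     // kept secret by the client
	sum := sha256.Sum256([]byte(verifier))
	challenge = base64.RawURLEncoding.EncodeToString(sum[:]) // sent with the authorize request
	return verifier, challenge, nil
}
```
The client presents the challenge when requesting authorization and the verifier when exchanging the code, which is what lets a public client prove possession without a client secret.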
Also adds some docs for it; please check the language as I'm not a native
English speaker.
Closes #17107, closes #25047
The admin config page has been broken many, many times; a little
refactoring could make this page panic.
So, add a test for it, and add another test to cover the 500 error page.
Co-authored-by: Giteabot <teabot@gitea.io>
This PR creates an API endpoint for creating/updating/deleting multiple
files in one API call similar to the solution provided by
[GitLab](https://docs.gitlab.com/ee/api/commits.html#create-a-commit-with-multiple-files-and-actions).
To achieve this, the CreateOrUpdateRepoFile and DeleteRepoFile functions
in the files service are unified into one function supporting multiple files
and actions.
Resolves #14619
This adds the ability to pin important Issues and Pull Requests. You can
also move pinned Issues around to change their Position. Resolves #2175.
## Screenshots
![grafik](https://user-images.githubusercontent.com/15185051/235123207-0aa39869-bb48-45c3-abe2-ba1e836046ec.png)
![grafik](https://user-images.githubusercontent.com/15185051/235123297-152a16ea-a857-451d-9a42-61f2cd54dd75.png)
![grafik](https://user-images.githubusercontent.com/15185051/235640782-cbfe25ec-6254-479a-a3de-133e585d7a2d.png)
The Design was mostly copied from the Projects Board.
## Implementation
This uses a new `pin_order` Column in the `issue` table. If the value is
set to 0, the Issue is not pinned. If it's set to a bigger value, the
value is the Position. 1 means it's the first pinned Issue, 2 means it's
the second one, etc. This is divided into Issues and Pull Requests for each
repo.
## TODO
- [x] You can currently pin as many Issues as you want. Maybe we should
add a Limit, which is configurable. GitHub uses 3, but I prefer 6, as
this is better for bigger Projects, but I'm open for suggestions.
- [x] Pin and Unpin events need to be added to the Issue history.
- [x] Tests
- [x] Migration
**The feature itself is currently fully working, so tester who may find
weird edge cases are very welcome!**
---------
Co-authored-by: silverwind <me@silverwind.io>
Co-authored-by: Giteabot <teabot@gitea.io>
## ⚠️ Breaking
The `log.<mode>.<logger>` style config has been dropped. If you used it,
please check the new config manual & app.example.ini to make your
instance output logs as expected.
Although many legacy options still work, it's encouraged to upgrade to
the new options.
The SMTP logger is deleted because SMTP is not suitable to collect logs.
If you have manually configured Gitea log options, please confirm the
logger system works as expected after upgrading.
## Description
Close #12082 and maybe more log-related issues, and resolve some related
FIXMEs in old code (which seemed unfixable before).
Just like rewriting queue #24505 : make code maintainable, clear legacy
bugs, and add the ability to support more writers (eg: JSON, structured
log)
There is a new document (with examples): `logging-config.en-us.md`
This PR is safer than the queue rewriting, because it's just for
logging, it won't break other logic.
## The old problems
The logging system is quite old and difficult to maintain:
* Unclear concepts: Logger, NamedLogger, MultiChannelledLogger,
SubLogger, EventLogger, WriterLogger etc
* For some code it is difficult to know whether it is right:
`log.DelNamedLogger("console")` vs `log.DelNamedLogger(log.DEFAULT)` vs
`log.DelLogger("console")`
* The old system heavily depends on the ini config system; it's difficult to
create a new logger for a different purpose, and it's very fragile.
* The "color" trick is difficult to use and read, many colors are
unnecessary, and in the future structured log could help
* It's difficult to add other log formats, eg: JSON format
* The log outputer doesn't have full control of its goroutine; it's
difficult to make the outputer have advanced behaviors
* The logs could be lost in some cases: eg: no Fatal error when using
CLI.
* Config options are passed by JSON, which is quite fragile.
* INI package makes the KEY in `[log]` section visible in `[log.sub1]`
and `[log.sub1.subA]`, this behavior is quite fragile and would cause
more unclear problems, and there is no strong requirement to support
`log.<mode>.<logger>` syntax.
## The new design
See `logger.go` for documents.
## Screenshot
<details>
![image](https://github.com/go-gitea/gitea/assets/2114189/4462d713-ba39-41f5-bb08-de912e67e1ff)
![image](https://github.com/go-gitea/gitea/assets/2114189/b188035e-f691-428b-8b2d-ff7b2199b2f9)
![image](https://github.com/go-gitea/gitea/assets/2114189/132e9745-1c3b-4e00-9e0d-15eaea495dee)
</details>
## TODO
* [x] add some new tests
* [x] fix some tests
* [x] test some sub-commands (manually ....)
---------
Co-authored-by: Jason Song <i@wolfogre.com>
Co-authored-by: delvh <dev.lh@web.de>
Co-authored-by: Giteabot <teabot@gitea.io>
This PR started as a refactor. It now does 4 things.
- [x] Move the renaming logic for organizations and users into the services
layer and merge it into one function
- [x] Support renaming a user by changing capitalization only. For example,
renaming the user from `Lunny` to `lunny` only needs to change the `user`
table; others should not be touched.
- [x] Before this PR, some renamings were missed, like `agit`
- [x] Fix a bug: the API return code is changed from `http.StatusNoContent` to `http.StatusOK`
Fix #12192: support SSH for `go get`
---------
Co-authored-by: Lunny Xiao <xiaolunwen@gmail.com>
Co-authored-by: Giteabot <teabot@gitea.io>
Co-authored-by: mfk <mfk@hengwei.com.cn>
Co-authored-by: silverwind <me@silverwind.io>
Fixes #24145
To solve the bug, I added a "computed" `TargetBehind` field to the
`Release` model, which indicates the target branch of a release.
This is particularly useful if the target branch was deleted in the
meantime (or is empty).
I also did a micro-optimization in `calReleaseNumCommitsBehind`: instead
of checking that a branch exists and then calling `GetBranchCommit`, I
immediately call `GetBranchCommit` and handle the `git.ErrNotExist`
error.
This optimization is covered by the added unit test.
The `GetAllCommits` endpoint can be pretty slow, especially in repos
with a lot of commits. The issue is that it spends a lot of time
calculating information that may not be useful/needed by the user.
The `stat` param was previously added in #21337 to address this, by
allowing the user to disable calculating the stats for each commit. But
this has two issues:
1. The name `stat` is rather misleading, because disabling `stat`
disables the Stat **and** Files. This should be separated out into two
different params, because getting a list of affected files is much less
expensive than calculating the stats
2. There's still other costly information provided that the user may not
need, such as `Verification`
This PR adds two parameters to the endpoint, `files` and `verification`,
to allow the user to explicitly disable this information when listing
commits. The default for both is true.
This fixes up https://github.com/go-gitea/gitea/pull/10446; in that, the
*expected* timezone was changed to the local timezone, but the computed
timezone was left in UTC.
The result was this failure, when run on a non-UTC system:
```
Diff:
--- Expected
+++ Actual
@@ -5,3 +5,3 @@
commitMsg: (string) (len=12) "init project",
- commitTime: (string) (len=29) "Wed, 14 Jun 2017 09:54:21 EDT"
+ commitTime: (string) (len=29) "Wed, 14 Jun 2017 13:54:21 UTC"
},
@@ -11,3 +11,3 @@
commitMsg: (string) (len=12) "init project",
- commitTime: (string) (len=29) "Wed, 14 Jun 2017 09:54:21 EDT"
+ commitTime: (string) (len=29) "Wed, 14 Jun 2017 13:54:21 UTC"
}
Test: TestViewRepo2
```
I assume this was probably missed since the CI servers all run in UTC?
The Format() string "Mon, 02 Jan 2006 15:04:05 UTC" was incorrect: 'UTC'
isn't recognized as a layout placeholder, so it was just being copied
verbatim. It should use 'MST' to tell Format() to output the attached
timezone, which is what `time.RFC1123` does.
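The underlying Go behavior, for reference:
```go
loc := time.FixedZone("EDT", -4*60*60)
t := time.Date(2017, 6, 14, 9, 54, 21, 0, loc)

fmt.Println(t.Format("Mon, 02 Jan 2006 15:04:05 UTC")) // Wed, 14 Jun 2017 09:54:21 UTC — "UTC" is copied verbatim, wrong zone label
fmt.Println(t.Format(time.RFC1123))                    // Wed, 14 Jun 2017 09:54:21 EDT — "MST" prints the attached zone
```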
# ⚠️ Breaking
Many deprecated queue config options are removed (actually, they should
have been removed in 1.18/1.19).
If you see the fatal message when starting Gitea: "Please update your
app.ini to remove deprecated config options", please follow the error
messages to remove these options from your app.ini.
Example:
```
2023/05/06 19:39:22 [E] Removed queue option: `[indexer].ISSUE_INDEXER_QUEUE_TYPE`. Use new options in `[queue.issue_indexer]`
2023/05/06 19:39:22 [E] Removed queue option: `[indexer].UPDATE_BUFFER_LEN`. Use new options in `[queue.issue_indexer]`
2023/05/06 19:39:22 [F] Please update your app.ini to remove deprecated config options
```
Many options in `[queue]` are dropped, including:
`WRAP_IF_NECESSARY`, `MAX_ATTEMPTS`, `TIMEOUT`, `WORKERS`,
`BLOCK_TIMEOUT`, `BOOST_TIMEOUT`, `BOOST_WORKERS`, they can be removed
from app.ini.
# The problem
The old queue package has some legacy problems:
* complexity: I doubt many people could tell how it works.
* maintainability: Too many channels and mutexes/conds are mixed together,
and too many different structs/interfaces depend on each other.
* stability: due to the complexity & maintainability issues, sometimes there
are strange bugs that are difficult to debug, and some code doesn't have
tests (indeed some code is difficult to test because a lot of things are
mixed together).
* general applicability: although it is called "queue", its behavior is
not a well-known queue.
* scalability: it doesn't seem easy to make it work with a cluster
without breaking its behaviors.
It came from some very old code to "avoid breaking", however, its
technical debt is too heavy now. It's a good time to introduce a better
"queue" package.
# The new queue package
It keeps using the old config and concepts as much as possible.
* It only contains two major kinds of concepts:
* The "base queue": channel, levelqueue, redis
* They have the same abstraction, the same interface, and they are
tested by the same testing code.
* The "WokerPoolQueue", it uses the "base queue" to provide "worker
pool" function, calls the "handler" to process the data in the base
queue.
* The new code doesn't do "PushBack"
* Think about a queue with many workers: the "PushBack" can't guarantee
the order of re-queued unhandled items, so the new code just does a
"normal push"
* The new code doesn't do "pause/resume"
* The "pause/resume" was designed to handle some handler's failure: eg:
document indexer (elasticsearch) is down
* If a queue is paused for a long time, either the producers block or new
items are dropped.
* The new code doesn't do such a "pause/resume" trick; it's not a common
queue behavior and it doesn't help much.
* If there are unhandled items, the "push" function just blocks for a
few seconds and then re-queues them and retries.
* The new code doesn't do "worker booster"
* Gitea's queue handlers are light functions; the cost is only the
goroutine, so it doesn't make sense to "boost" them.
* The new code only uses a "max worker number" to limit the concurrent
workers.
* The new "Push" never blocks forever
* Instead of creating more and more blocking goroutines, returning an error
is friendlier to the server and to the end user.
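A purely conceptual sketch of the two-layer design described above (this is not the actual `WorkerPoolQueue` API, just the shape of the idea):
```go
// baseQueue is the storage layer: channel, levelqueue and redis all provide this shape.
type baseQueue[T any] interface {
	Push(item T) error      // may block briefly, then return an error instead of blocking forever
	Pop() (item T, ok bool) // ok is false when the queue is shutting down
}

// runWorkerPool is the worker-pool layer: a bounded number of workers
// pop items from the base queue and hand them to the handler.
func runWorkerPool[T any](q baseQueue[T], maxWorkers int, handler func(item T)) {
	for i := 0; i < maxWorkers; i++ {
		go func() {
			for {
				item, ok := q.Pop()
				if !ok {
					return
				}
				handler(item)
			}
		}()
	}
}
```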
There are more details in code comments: eg: the "Flush" problem, the
strange "code.index" hanging problem, the "immediate" queue problem.
Almost ready for review.
TODO:
* [x] add some necessary comments during review
* [x] add some more tests if necessary
* [x] update documents and config options
* [x] test max worker / active worker
* [x] re-run the CI tasks to see whether any test is flaky
* [x] improve the `handleOldLengthConfiguration` to provide more
friendly messages
* [x] fine tune default config values (eg: length?)
## Code coverage:
![image](https://user-images.githubusercontent.com/2114189/236620635-55576955-f95d-4810-b12f-879026a3afdf.png)
Due to #24409, we can now specify '--not' when getting all commits from
a repo to exclude commits from a different branch.
When I wrote that PR, I forgot to also update the code that counts the
number of commits in the repo. So now, if the --not option is used, it
may return too many commits, which can indicate that another page of
data is available when it is not.
This PR passes --not to the commands that count the number of commits in
a repo
This gives more "freshness" to the explore page. So it's not just the
same X users on the explore page by default, now it matches the same
sort as the repos on the explore page.
---------
Co-authored-by: Lunny Xiao <xiaolunwen@gmail.com>
This pull request adds an integration test to validate the behavior of
the raw content API's reference handling for all supported formats.
Close #24242
Co-authored-by: Lunny Xiao <xiaolunwen@gmail.com>
Co-authored-by: @awkwardbunny
This PR adds a Debian package registry. You can follow [this
tutorial](https://www.baeldung.com/linux/create-debian-package) to build
a *.deb package for testing. Source packages are not supported at the
moment and I did not find documentation of the architecture "all" and
how these packages should be treated.
---------
Co-authored-by: Brian Hong <brian@hongs.me>
Co-authored-by: techknowlogick <techknowlogick@gitea.io>
Fix https://github.com/go-gitea/gitea/pull/24362/files#r1179095324
`getAuthenticatedMeta` has already checked them; this code is duplicated.
And the first invocation has a wrong permission check: `DownloadHandle`
should require read permission, not write.
> The scoped token PR just checked all API routes but in fact, some web
routes like `LFS`, git `HTTP`, container, and attachments supports basic
auth. This PR added scoped token check for them.
---------
Signed-off-by: jolheiser <john.olheiser@gmail.com>
Co-authored-by: Lunny Xiao <xiaolunwen@gmail.com>
This adds an API for getting license templates. This tries to be as close
to the [GitHub
API](https://docs.github.com/en/rest/licenses?apiVersion=2022-11-28) as
possible, but Gitea does not support all features that GitHub has. I
think they should be added, but that is out of the scope of this PR. You
should merge #23006 before this PR for security reasons.
Follow #22719
### Major changes
1. `ServerError` doesn't do format, so remove the `%s`
2. Simplify `RenderBranchFeed` (slightly)
3. Remove unused `BranchFeedRSS`
4. Make `feed.RenderBranchFeed` respect `EnableFeed` config
5. Make `RepoBranchTagSelector.vue` respect `EnableFeed` setting,
otherwise there is always RSS icon
6. The `(branchURLPrefix + item.url).replace('src', 'rss')` doesn't seem
right for all cases, for example, the string `src` could appear in
`branchURLPrefix`, so we need a separate `rssURLPrefix`
7. The `<a>` in Vue menu needs `@click.stop`, otherwise the menu itself
would be triggered at the same time
8. Change `<a><button></button></a>` to `<a role=button>`
9. Use `{{PathEscapeSegments .TreePath}}` instead of `{{range $i, $v :=
.TreeNames}}/{{$v}}{{end}}`
Screenshot of changed parts:
<details>
![image](https://user-images.githubusercontent.com/2114189/234315538-66603694-9093-48a8-af33-83575fd7a018.png)
![image](https://user-images.githubusercontent.com/2114189/234315786-f1efa60b-012e-490b-8ce2-d448dc6fe5c9.png)
![image](https://user-images.githubusercontent.com/2114189/234334941-446941bc-1baa-4256-8850-ccc439476cda.png)
</details>
### Other thoughts
Should we remove the RSS icon from the branch dropdown list? It seems
too complex for a list UI, and users already have the chance to get the
RSS feed URL from "branches" page.
---------
Co-authored-by: 6543 <6543@obermui.de>
Co-authored-by: silverwind <me@silverwind.io>
Enable the [forbidigo](https://github.com/ashanbrown/forbidigo) linter, which
forbids print statements. I will check how to integrate this with the
smallest impact possible, so a few `nolint` comments will likely be
required. The plan is to just go through the issues and either:
- Remove the print if it is nonsensical
- Add a `//nolint` directive if it makes sense
I don't plan on investigating the individual issues any further.
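For the prints that are kept, the suppression would look like this (standard golangci-lint `//nolint` syntax; the line shown is just one of the findings listed below):
```go
// Intentional CLI output, so the forbidigo finding is suppressed for this line.
fmt.Printf("Access token was successfully created... %s\n", t.Token) //nolint:forbidigo
```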
<details>
<summary>Initial Lint Results</summary>
```
modules/log/event.go:348:6: use of `fmt.Println` forbidden by pattern `^(fmt\.Print(|f|ln)|print|println)$` (forbidigo)
fmt.Println(err)
^
modules/log/event.go:382:6: use of `fmt.Println` forbidden by pattern `^(fmt\.Print(|f|ln)|print|println)$` (forbidigo)
fmt.Println(err)
^
modules/queue/unique_queue_disk_channel_test.go:20:2: use of `fmt.Printf` forbidden by pattern `^(fmt\.Print(|f|ln)|print|println)$` (forbidigo)
fmt.Printf("TempDir %s\n", tmpDir)
^
contrib/backport/backport.go:168:2: use of `fmt.Printf` forbidden by pattern `^(fmt\.Print(|f|ln)|print|println)$` (forbidigo)
fmt.Printf("* Backporting %s to %s as %s\n", pr, localReleaseBranch, backportBranch)
^
contrib/backport/backport.go:216:4: use of `fmt.Printf` forbidden by pattern `^(fmt\.Print(|f|ln)|print|println)$` (forbidigo)
fmt.Printf("* Navigate to %s to open PR\n", url)
^
contrib/backport/backport.go:223:2: use of `fmt.Printf` forbidden by pattern `^(fmt\.Print(|f|ln)|print|println)$` (forbidigo)
fmt.Printf("* `xdg-open %s`\n", url)
^
contrib/backport/backport.go:233:2: use of `fmt.Printf` forbidden by pattern `^(fmt\.Print(|f|ln)|print|println)$` (forbidigo)
fmt.Printf("* `git push -u %s %s`\n", remote, backportBranch)
^
contrib/backport/backport.go:243:2: use of `fmt.Printf` forbidden by pattern `^(fmt\.Print(|f|ln)|print|println)$` (forbidigo)
fmt.Printf("* Amending commit to prepend `Backport #%s` to body\n", pr)
^
contrib/backport/backport.go:272:3: use of `fmt.Println` forbidden by pattern `^(fmt\.Print(|f|ln)|print|println)$` (forbidigo)
fmt.Println("* Attempting git cherry-pick --continue")
^
contrib/backport/backport.go:281:2: use of `fmt.Printf` forbidden by pattern `^(fmt\.Print(|f|ln)|print|println)$` (forbidigo)
fmt.Printf("* Attempting git cherry-pick %s\n", sha)
^
contrib/backport/backport.go:297:2: use of `fmt.Printf` forbidden by pattern `^(fmt\.Print(|f|ln)|print|println)$` (forbidigo)
fmt.Printf("* Current branch is %s\n", currentBranch)
^
contrib/backport/backport.go:299:3: use of `fmt.Printf` forbidden by pattern `^(fmt\.Print(|f|ln)|print|println)$` (forbidigo)
fmt.Printf("* Current branch is %s - not checking out\n", currentBranch)
^
contrib/backport/backport.go:304:3: use of `fmt.Printf` forbidden by pattern `^(fmt\.Print(|f|ln)|print|println)$` (forbidigo)
fmt.Printf("* Branch %s already exists. Checking it out...\n", backportBranch)
^
contrib/backport/backport.go:308:2: use of `fmt.Printf` forbidden by pattern `^(fmt\.Print(|f|ln)|print|println)$` (forbidigo)
fmt.Printf("* `git checkout -b %s %s`\n", backportBranch, releaseBranch)
^
contrib/backport/backport.go:313:2: use of `fmt.Printf` forbidden by pattern `^(fmt\.Print(|f|ln)|print|println)$` (forbidigo)
fmt.Printf("* `git fetch %s main`\n", remote)
^
contrib/backport/backport.go:316:3: use of `fmt.Println` forbidden by pattern `^(fmt\.Print(|f|ln)|print|println)$` (forbidigo)
fmt.Println(string(out))
^
contrib/backport/backport.go:319:2: use of `fmt.Println` forbidden by pattern `^(fmt\.Print(|f|ln)|print|println)$` (forbidigo)
fmt.Println(string(out))
^
contrib/backport/backport.go:321:2: use of `fmt.Printf` forbidden by pattern `^(fmt\.Print(|f|ln)|print|println)$` (forbidigo)
fmt.Printf("* `git fetch %s %s`\n", remote, releaseBranch)
^
contrib/backport/backport.go:324:3: use of `fmt.Println` forbidden by pattern `^(fmt\.Print(|f|ln)|print|println)$` (forbidigo)
fmt.Println(string(out))
^
contrib/backport/backport.go:327:2: use of `fmt.Println` forbidden by pattern `^(fmt\.Print(|f|ln)|print|println)$` (forbidigo)
fmt.Println(string(out))
^
models/unittest/fixtures.go:50:3: use of `fmt.Println` forbidden by pattern `^(fmt\.Print(|f|ln)|print|println)$` (forbidigo)
fmt.Println("Unsupported RDBMS for integration tests")
^
models/unittest/fixtures.go:89:3: use of `fmt.Printf` forbidden by pattern `^(fmt\.Print(|f|ln)|print|println)$` (forbidigo)
fmt.Printf("LoadFixtures failed after retries: %v\n", err)
^
models/unittest/fixtures.go:110:4: use of `fmt.Printf` forbidden by pattern `^(fmt\.Print(|f|ln)|print|println)$` (forbidigo)
fmt.Printf("Failed to generate sequence update: %v\n", err)
^
models/unittest/fixtures.go:117:6: use of `fmt.Printf` forbidden by pattern `^(fmt\.Print(|f|ln)|print|println)$` (forbidigo)
fmt.Printf("Failed to update sequence: %s Error: %v\n", value, err)
^
models/migrations/base/tests.go:118:3: use of `fmt.Println` forbidden by pattern `^(fmt\.Print(|f|ln)|print|println)$` (forbidigo)
fmt.Println("Environment variable $GITEA_ROOT not set")
^
models/migrations/base/tests.go:127:3: use of `fmt.Printf` forbidden by pattern `^(fmt\.Print(|f|ln)|print|println)$` (forbidigo)
fmt.Printf("Could not find gitea binary at %s\n", setting.AppPath)
^
models/migrations/base/tests.go:134:3: use of `fmt.Printf` forbidden by pattern `^(fmt\.Print(|f|ln)|print|println)$` (forbidigo)
fmt.Printf("Environment variable $GITEA_CONF not set - defaulting to %s\n", giteaConf)
^
models/migrations/base/tests.go:145:3: use of `fmt.Printf` forbidden by pattern `^(fmt\.Print(|f|ln)|print|println)$` (forbidigo)
fmt.Printf("Unable to create temporary data path %v\n", err)
^
models/migrations/base/tests.go:154:3: use of `fmt.Printf` forbidden by pattern `^(fmt\.Print(|f|ln)|print|println)$` (forbidigo)
fmt.Printf("Unable to InitFull: %v\n", err)
^
models/migrations/v1_11/v112.go:34:5: use of `fmt.Printf` forbidden by pattern `^(fmt\.Print(|f|ln)|print|println)$` (forbidigo)
fmt.Printf("Error: %v", err)
^
contrib/fixtures/fixture_generation.go:36:3: use of `fmt.Printf` forbidden by pattern `^(fmt\.Print(|f|ln)|print|println)$` (forbidigo)
fmt.Printf("CreateTestEngine: %+v", err)
^
contrib/fixtures/fixture_generation.go:40:3: use of `fmt.Printf` forbidden by pattern `^(fmt\.Print(|f|ln)|print|println)$` (forbidigo)
fmt.Printf("PrepareTestDatabase: %+v\n", err)
^
contrib/fixtures/fixture_generation.go:46:5: use of `fmt.Printf` forbidden by pattern `^(fmt\.Print(|f|ln)|print|println)$` (forbidigo)
fmt.Printf("generate '%s': %+v\n", r, err)
^
contrib/fixtures/fixture_generation.go:53:5: use of `fmt.Printf` forbidden by pattern `^(fmt\.Print(|f|ln)|print|println)$` (forbidigo)
fmt.Printf("generate '%s': %+v\n", g.name, err)
^
contrib/fixtures/fixture_generation.go:71:4: use of `fmt.Printf` forbidden by pattern `^(fmt\.Print(|f|ln)|print|println)$` (forbidigo)
fmt.Printf("%s created.\n", path)
^
services/gitdiff/gitdiff_test.go:543:2: use of `println` forbidden by pattern `^(fmt\.Print(|f|ln)|print|println)$` (forbidigo)
println(result)
^
services/gitdiff/gitdiff_test.go:560:2: use of `println` forbidden by pattern `^(fmt\.Print(|f|ln)|print|println)$` (forbidigo)
println(result)
^
services/gitdiff/gitdiff_test.go:577:2: use of `println` forbidden by pattern `^(fmt\.Print(|f|ln)|print|println)$` (forbidigo)
println(result)
^
modules/web/routing/logger_manager.go:34:2: use of `print` forbidden by pattern `^(fmt\.Print(|f|ln)|print|println)$` (forbidigo)
print Printer
^
modules/doctor/paths.go:109:3: use of `fmt.Printf` forbidden by pattern `^(fmt\.Print(|f|ln)|print|println)$` (forbidigo)
fmt.Printf("Warning: can't remove temporary file: '%s'\n", tmpFile.Name())
^
tests/test_utils.go:33:2: use of `fmt.Printf` forbidden by pattern `^(fmt\.Print(|f|ln)|print|println)$` (forbidigo)
fmt.Printf(format+"\n", args...)
^
tests/test_utils.go:61:3: use of `fmt.Printf` forbidden by pattern `^(fmt\.Print(|f|ln)|print|println)$` (forbidigo)
fmt.Printf("Environment variable $GITEA_CONF not set, use default: %s\n", giteaConf)
^
cmd/actions.go:54:9: use of `fmt.Printf` forbidden by pattern `^(fmt\.Print(|f|ln)|print|println)$` (forbidigo)
_, _ = fmt.Printf("%s\n", respText)
^
cmd/admin_user_change_password.go:74:2: use of `fmt.Printf` forbidden by pattern `^(fmt\.Print(|f|ln)|print|println)$` (forbidigo)
fmt.Printf("%s's password has been successfully updated!\n", user.Name)
^
cmd/admin_user_create.go:109:3: use of `fmt.Printf` forbidden by pattern `^(fmt\.Print(|f|ln)|print|println)$` (forbidigo)
fmt.Printf("generated random password is '%s'\n", password)
^
cmd/admin_user_create.go:164:3: use of `fmt.Printf` forbidden by pattern `^(fmt\.Print(|f|ln)|print|println)$` (forbidigo)
fmt.Printf("Access token was successfully created... %s\n", t.Token)
^
cmd/admin_user_create.go:167:2: use of `fmt.Printf` forbidden by pattern `^(fmt\.Print(|f|ln)|print|println)$` (forbidigo)
fmt.Printf("New user '%s' has been successfully created!\n", username)
^
cmd/admin_user_generate_access_token.go:74:3: use of `fmt.Printf` forbidden by pattern `^(fmt\.Print(|f|ln)|print|println)$` (forbidigo)
fmt.Printf("%s\n", t.Token)
^
cmd/admin_user_generate_access_token.go:76:3: use of `fmt.Printf` forbidden by pattern `^(fmt\.Print(|f|ln)|print|println)$` (forbidigo)
fmt.Printf("Access token was successfully created: %s\n", t.Token)
^
cmd/admin_user_must_change_password.go:56:2: use of `fmt.Printf` forbidden by pattern `^(fmt\.Print(|f|ln)|print|println)$` (forbidigo)
fmt.Printf("Updated %d users setting MustChangePassword to %t\n", n, mustChangePassword)
^
cmd/convert.go:44:3: use of `fmt.Println` forbidden by pattern `^(fmt\.Print(|f|ln)|print|println)$` (forbidigo)
fmt.Println("Converted successfully, please confirm your database's character set is now utf8mb4")
^
cmd/convert.go:50:3: use of `fmt.Println` forbidden by pattern `^(fmt\.Print(|f|ln)|print|println)$` (forbidigo)
fmt.Println("Converted successfully, please confirm your database's all columns character is NVARCHAR now")
^
cmd/convert.go:52:3: use of `fmt.Println` forbidden by pattern `^(fmt\.Print(|f|ln)|print|println)$` (forbidigo)
fmt.Println("This command can only be used with a MySQL or MSSQL database")
^
cmd/doctor.go:104:3: use of `fmt.Println` forbidden by pattern `^(fmt\.Print(|f|ln)|print|println)$` (forbidigo)
fmt.Println(err)
^
cmd/doctor.go:105:3: use of `fmt.Println` forbidden by pattern `^(fmt\.Print(|f|ln)|print|println)$` (forbidigo)
fmt.Println("Check if you are using the right config file. You can use a --config directive to specify one.")
^
cmd/doctor.go:243:3: use of `fmt.Println` forbidden by pattern `^(fmt\.Print(|f|ln)|print|println)$` (forbidigo)
fmt.Println(err)
^
cmd/embedded.go:154:3: use of `fmt.Println` forbidden by pattern `^(fmt\.Print(|f|ln)|print|println)$` (forbidigo)
fmt.Println(a.path)
^
cmd/embedded.go:198:3: use of `fmt.Println` forbidden by pattern `^(fmt\.Print(|f|ln)|print|println)$` (forbidigo)
fmt.Println("Using app.ini at", setting.CustomConf)
^
cmd/embedded.go:217:2: use of `fmt.Printf` forbidden by pattern `^(fmt\.Print(|f|ln)|print|println)$` (forbidigo)
fmt.Printf("Extracting to %s:\n", destdir)
^
cmd/embedded.go:253:3: use of `fmt.Printf` forbidden by pattern `^(fmt\.Print(|f|ln)|print|println)$` (forbidigo)
fmt.Printf("%s already exists; skipped.\n", dest)
^
cmd/embedded.go:275:2: use of `fmt.Println` forbidden by pattern `^(fmt\.Print(|f|ln)|print|println)$` (forbidigo)
fmt.Println(dest)
^
cmd/generate.go:63:2: use of `fmt.Printf` forbidden by pattern `^(fmt\.Print(|f|ln)|print|println)$` (forbidigo)
fmt.Printf("%s", internalToken)
^
cmd/generate.go:66:3: use of `fmt.Printf` forbidden by pattern `^(fmt\.Print(|f|ln)|print|println)$` (forbidigo)
fmt.Printf("\n")
^
cmd/generate.go:78:2: use of `fmt.Printf` forbidden by pattern `^(fmt\.Print(|f|ln)|print|println)$` (forbidigo)
fmt.Printf("%s", JWTSecretBase64)
^
cmd/generate.go:81:3: use of `fmt.Printf` forbidden by pattern `^(fmt\.Print(|f|ln)|print|println)$` (forbidigo)
fmt.Printf("\n")
^
cmd/generate.go:93:2: use of `fmt.Printf` forbidden by pattern `^(fmt\.Print(|f|ln)|print|println)$` (forbidigo)
fmt.Printf("%s", secretKey)
^
cmd/generate.go:96:3: use of `fmt.Printf` forbidden by pattern `^(fmt\.Print(|f|ln)|print|println)$` (forbidigo)
fmt.Printf("\n")
^
cmd/keys.go:74:2: use of `fmt.Println` forbidden by pattern `^(fmt\.Print(|f|ln)|print|println)$` (forbidigo)
fmt.Println(strings.TrimSpace(authorizedString))
^
cmd/mailer.go:32:4: use of `fmt.Print` forbidden by pattern `^(fmt\.Print(|f|ln)|print|println)$` (forbidigo)
fmt.Print("warning: Content is empty")
^
cmd/mailer.go:35:3: use of `fmt.Print` forbidden by pattern `^(fmt\.Print(|f|ln)|print|println)$` (forbidigo)
fmt.Print("Proceed with sending email? [Y/n] ")
^
cmd/mailer.go:40:4: use of `fmt.Println` forbidden by pattern `^(fmt\.Print(|f|ln)|print|println)$` (forbidigo)
fmt.Println("The mail was not sent")
^
cmd/mailer.go:49:9: use of `fmt.Printf` forbidden by pattern `^(fmt\.Print(|f|ln)|print|println)$` (forbidigo)
_, _ = fmt.Printf("Sent %s email(s) to all users\n", respText)
^
cmd/serv.go:147:3: use of `println` forbidden by pattern `^(fmt\.Print(|f|ln)|print|println)$` (forbidigo)
println("Gitea: SSH has been disabled")
^
cmd/serv.go:153:4: use of `fmt.Printf` forbidden by pattern `^(fmt\.Print(|f|ln)|print|println)$` (forbidigo)
fmt.Printf("error showing subcommand help: %v\n", err)
^
cmd/serv.go:175:4: use of `println` forbidden by pattern `^(fmt\.Print(|f|ln)|print|println)$` (forbidigo)
println("Hi there! You've successfully authenticated with the deploy key named " + key.Name + ", but Gitea does not provide shell access.")
^
cmd/serv.go:177:4: use of `println` forbidden by pattern `^(fmt\.Print(|f|ln)|print|println)$` (forbidigo)
println("Hi there! You've successfully authenticated with the principal " + key.Content + ", but Gitea does not provide shell access.")
^
cmd/serv.go:179:4: use of `println` forbidden by pattern `^(fmt\.Print(|f|ln)|print|println)$` (forbidigo)
println("Hi there, " + user.Name + "! You've successfully authenticated with the key named " + key.Name + ", but Gitea does not provide shell access.")
^
cmd/serv.go:181:3: use of `println` forbidden by pattern `^(fmt\.Print(|f|ln)|print|println)$` (forbidigo)
println("If this is unexpected, please log in with password and setup Gitea under another user.")
^
cmd/serv.go:196:5: use of `fmt.Print` forbidden by pattern `^(fmt\.Print(|f|ln)|print|println)$` (forbidigo)
fmt.Print(`{"type":"gitea","version":1}`)
^
tests/e2e/e2e_test.go:54:3: use of `fmt.Printf` forbidden by pattern `^(fmt\.Print(|f|ln)|print|println)$` (forbidigo)
fmt.Printf("Error initializing test database: %v\n", err)
^
tests/e2e/e2e_test.go:63:3: use of `fmt.Printf` forbidden by pattern `^(fmt\.Print(|f|ln)|print|println)$` (forbidigo)
fmt.Printf("util.RemoveAll: %v\n", err)
^
tests/e2e/e2e_test.go:67:3: use of `fmt.Printf` forbidden by pattern `^(fmt\.Print(|f|ln)|print|println)$` (forbidigo)
fmt.Printf("Unable to remove repo indexer: %v\n", err)
^
tests/e2e/e2e_test.go:109:6: use of `fmt.Printf` forbidden by pattern `^(fmt\.Print(|f|ln)|print|println)$` (forbidigo)
fmt.Printf("%v", stdout.String())
^
tests/e2e/e2e_test.go:110:6: use of `fmt.Printf` forbidden by pattern `^(fmt\.Print(|f|ln)|print|println)$` (forbidigo)
fmt.Printf("%v", stderr.String())
^
tests/e2e/e2e_test.go:113:6: use of `fmt.Printf` forbidden by pattern `^(fmt\.Print(|f|ln)|print|println)$` (forbidigo)
fmt.Printf("%v", stdout.String())
^
tests/integration/integration_test.go:124:3: use of `fmt.Printf` forbidden by pattern `^(fmt\.Print(|f|ln)|print|println)$` (forbidigo)
fmt.Printf("Error initializing test database: %v\n", err)
^
tests/integration/integration_test.go:135:3: use of `fmt.Printf` forbidden by pattern `^(fmt\.Print(|f|ln)|print|println)$` (forbidigo)
fmt.Printf("util.RemoveAll: %v\n", err)
^
tests/integration/integration_test.go:139:3: use of `fmt.Printf` forbidden by pattern `^(fmt\.Print(|f|ln)|print|println)$` (forbidigo)
fmt.Printf("Unable to remove repo indexer: %v\n", err)
^
tests/integration/repo_test.go:357:4: use of `fmt.Printf` forbidden by pattern `^(fmt\.Print(|f|ln)|print|println)$` (forbidigo)
fmt.Printf("%s", resp.Body)
^
```
</details>
---------
Co-authored-by: Giteabot <teabot@gitea.io>
- Update all tool dependencies to latest tag
- Remove unused errcheck, it is part of golangci-lint
- Include main.go in air
- Enable wastedassign again now that it's
[generics-compatible](https://github.com/golangci/golangci-lint/pull/3689)
- Restructured lint targets to new `lint-*` namespace
Before, there was a `log/buffer.go`, but that design was not general: it introduced a lot of irrelevant `Content() (string, error)` and `return "", fmt.Errorf("not supported")` code.
The old `log/buffer.go` was also difficult to use; developers had to write a lot of `Contains` and `Sleep` code.
The new `LogChecker` is designed to be a general approach to help to
assert some messages appearing or not appearing in logs.
Close #24062
At the beginning, I just wanted to fix the warning mentioned by #24062
But the cookie code really doesn't look good to me, so I cleaned it up.
Complete the TODO on `SetCookie`:
> TODO: Copied from gitea.com/macaron/macaron and should be improved
after macaron removed.
I've heard many reports of users getting scared when they see their own
email address on their own profile, as they believe that the email
field is also visible to other users. Currently, using Incognito mode
or going over the Settings is the only "reasonable" way to verify this
from the perspective of the user.
A locked padlock should be enough to indicate that the email is not
visible to anyone apart from the user and the admins. An unlocked
padlock is used if the email address is only shown to authenticated
users.
Some additional string-related changes in the Settings were introduced
as well to ensure consistency, and the comments in the relevant tests
were improved so as to allow for easier modifications in the future.
---
#### Screenshot (EDIT: Scroll down for more up-to-date screenshots)
***Please remove this section before merging.***
![image](https://user-images.githubusercontent.com/30193966/229572425-909894aa-a7d5-4bf3-92d3-23b1921dcc90.png)
This lock should only appear if the email address is explicitly hidden
using the `Hide Email Address` setting. The change was originally tested
on top of and designed for the Forgejo fork, but I don't expect any
problems to arise from this and I don't think that a
documentation-related change is strictly necessary.
---------
Co-authored-by: silverwind <me@silverwind.io>
None of the features of the `unrolled/render` package are used.
The Golang builtin "html/template" just works well. Then we can improve
our HTML render to resolve the "$.root.locale.Tr" problem as much as
possible.
Next step: we can have a template render pool (by Clone), then we can
inject global functions with dynamic context into every `Execute` call.
Then we can use `{{Locale.Tr ....}}` directly in all templates, no need
to pass the `$.root.locale` again and again.
At first, we had one unified team unit permission, which is called
`Team.Authorize` in the DB.
But since https://github.com/go-gitea/gitea/pull/17811, we allow
different units to have different permissions.
The old code was only designed for the old behavior. So after #17811, if
org users had write permission for other units but no permission for
packages, they could still get write permission for packages.
Co-authored-by: delvh <dev.lh@web.de>
# Why this PR comes
At first, I'd like to help users like #23636 (there are a lot)
The unclear "Internal Server Error" is quite anonying, scare users,
frustrate contributors, nobody knows what happens.
So, it's always good to provide meaningful messages to end users (of
course, do not leak sensitive information).
When I started working on the "response message to end users", I found
that the related code has a lot of technical debt. A lot of copy&paste
code, unclear fields and usages.
So I think it's good to make everything clear.
# Tech Background
Gitea has many sub-commands; some are used by admins, some by SSH
servers or Git hooks. Many sub-commands use the "internal API" to
communicate with the Gitea web server.
Before, the Gitea server always used `StatusCode + JSON "err" field` to
return messages.
* The CLI sub-commands: they expect to show all error related messages
to site admin
* The Serv/Hook sub-commands (for git clients): they could only show
safe messages to end users, the error log could only be recorded by
"SSHLog" to Gitea web server.
In the old design, it assumes that:
* If the StatusCode is 500 (in some functions), then the "err" field is
error log, shouldn't be exposed to git client.
* If the StatusCode is 40x, then the "err" field could be exposed. And
some functions always read the "err" no matter what the StatusCode is.
The old code is not strict, and it's difficult to distinguish the
messages clearly and then output them correctly.
# This PR
To help to remove duplicate code and make everything clear, this PR
introduces `ResponseExtra` and `requestJSONResp`.
* `ResponseExtra` is a struct which contains the "extra" information of an
internal API response, including StatusCode, UserMsg, Error (see the sketch below)
* `requestJSONResp` is a generic function which can be used in all
cases to help simplify the calls.
* Remove all `map["err"]`, always use `private.Response{Err}` to
construct error messages.
* User messages and error messages are separated clearly, the `fail` and
`handleCliResponseExtra` will output correct messages.
* Replace all `Internal Server Error` messages with meaningful (still
safe) messages.
This PR saves more than 300 lines while making the git client messages
clearer.
Many gitea-serv/git-hook related essential functions are covered by
tests.
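As a rough illustration of the shape of these helpers (the field and function names follow the description above; the package, signatures, and bodies are assumptions, not the exact Gitea code):
```go
package private

import (
	"encoding/json"
	"io"
	"net/http"
)

// ResponseExtra carries the "extra" information of an internal API response.
// Field names come from the description above; the concrete types are assumptions.
type ResponseExtra struct {
	StatusCode int    // HTTP status of the internal API response
	UserMsg    string // message that is safe to show to the end user (git client)
	Error      string // detail meant only for the site admin / server log
}

// requestJSONResp is a sketch of a generic helper: it performs the request,
// decodes the JSON body into T and returns it together with the ResponseExtra.
func requestJSONResp[T any](client *http.Client, req *http.Request) (*T, ResponseExtra) {
	resp, err := client.Do(req)
	if err != nil {
		return nil, ResponseExtra{StatusCode: http.StatusInternalServerError, Error: err.Error()}
	}
	defer resp.Body.Close()

	body, err := io.ReadAll(resp.Body)
	if err != nil {
		return nil, ResponseExtra{StatusCode: resp.StatusCode, Error: err.Error()}
	}

	var res T
	if err := json.Unmarshal(body, &res); err != nil {
		return nil, ResponseExtra{StatusCode: resp.StatusCode, Error: "invalid JSON response: " + err.Error()}
	}
	return &res, ResponseExtra{StatusCode: resp.StatusCode}
}
```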
---------
Co-authored-by: delvh <dev.lh@web.de>
Closes #20955
This PR adds the possibility to disable blank issues when the repo has
templates. This can be done by creating the file
`.gitea/issue_config.yaml` in the repo and setting `blank_issues_enabled`
accordingly.
Always respect the `setting.UI.ShowUserEmail` and `KeepEmailPrivate`
settings.
* It doesn't make sense to show a user's own E-mail to themselves.
* Always hide the E-mail if KeepEmailPrivate=true, so the user can see
how their profile page looks to others.
* Revert the `setting.UI.ShowUserEmail` change from #4981. This setting
is used to control the E-mail display in general, not only for the user list page.
PS: the incorrect `<div .../>` tag on the profile page has already been fixed by
#23748, so this PR becomes simpler.
* Clean the "tools" directory. The "tools" directory contains only two
files, move them.
* The "external_renderer.go" works like "cat" command to echo Stdin to
Stdout , to help testing.
* The `// gobuild: external_renderer` is incorrect, there should be no
space: `//gobuild: external_renderer`
* The `fmt.Print(os.Args[1])` is not a well-defined behavior, and it's
never used.
* The "watch.sh" is for "make watch", it's somewhat related to "build"
* After this PR, there is no "tools" directory, the project root
directory looks slightly simpler than before.
* Remove the legacy "contrib/autoboot.sh", there is no
"gogs_supervisord.sh"
* Remove the legacy "contrib/mysql.sql", it's never mentioned anywhere.
* Remove the legacy "contrib/pr/checkout.go", it has been broken for
long time, and it introduces unnecessary dependencies of the main code
base.
Follow:
* #23574
* Remove all ".tooltip[data-content=...]"
Major changes:
* Remove "tooltip" class, use "[data-tooltip-content=...]" instead of
".tooltip[data-content=...]"
* Remove legacy `data-position`, it has been dead code since the last Fomantic
Tooltip -> Tippy Tooltip refactoring
* Rename reaction attribute from `data-content` to
`data-reaction-content`
* Add comments for some `data-content`: `{{/* used by the form */}}`
* Remove empty "ui" class
* Use "text color" for SVG icons (a few)
Close #23444
Add `Repository` to npm package `Metadata` struct so the `repository` in
`package.json` can be stored and be returned in the endpoint.
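A rough sketch of what that addition could look like (a simplified shape for illustration; in `package.json` the `repository` field may also be a plain string, and the real Gitea struct has more fields):
```go
package npm

// Repository mirrors the "repository" object of a package.json file.
type Repository struct {
	Type string `json:"type,omitempty"` // e.g. "git"
	URL  string `json:"url,omitempty"`  // e.g. "https://example.com/owner/repo.git"
}

// Metadata is the npm package metadata; only the new field is shown here.
type Metadata struct {
	Repository Repository `json:"repository,omitempty"`
}
```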
Co-authored-by: KN4CK3R <admin@oldschoolhack.me>
`namedBlob` turned out to be a poor imitation of a `TreeEntry`. Using
the latter directly shortens this code.
This partially undoes https://github.com/go-gitea/gitea/pull/23152/,
which I found a merge conflict with, and also expands the test it added
to cover the subtle README-in-a-subfolder case.
Support iterating over a subdirectory in ObjectStorage via the
ObjectStorage.Iterator method.
It's required for https://github.com/go-gitea/gitea/pull/22738 to make
artifact files cleanable.
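A minimal sketch of the idea (the interface below is reduced and the exact names are assumptions; only the new directory parameter matters here):
```go
package storage

import "os"

// Object is a stored object; only what is needed for iteration is sketched here.
type Object interface {
	Stat() (os.FileInfo, error)
}

// ObjectStorage is a reduced sketch of the storage interface. The dirName
// parameter limits iteration to a subdirectory; an empty string iterates over
// every object, which is the behavior described above.
type ObjectStorage interface {
	Iterator(dirName string, fn func(path string, obj Object) error) error
}
```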
---------
Co-authored-by: Jason Song <i@wolfogre.com>
Co-authored-by: Lunny Xiao <xiaolunwen@gmail.com>
- This PR attempts to split our various DB tests into separate
pipelines.
- It splits up some of the extra feature-related tests rather than
having most of them in the MySQL test.
- It disables the race detector for some of the pipelines as well, as it
can cause slower runs and is mostly redundant when the pipelines just
swap DBs.
- It builds without SQLite support for any of the non-SQLite pipelines.
- It moves the e2e test to using SQLite rather than PG (partially
because I moved the minio tests to PG and that mucked up the test
config, and partially because it avoids another running service)
- It splits up the `go mod download` task in the Makefile from the tool
installation, as the tools are only needed in the compliance pipeline.
(Arguably even some of the tools aren't needed there, but that could be
a follow-up PR)
- SQLite is now the only arm64 pipeline, moving PG back to amd64 which
can leverage autoscaler
Should resolve #22010 - one thing that wasn't changed here but is
mentioned in that issue, unit tests are needed in the same pipeline as
an integration test in order to form a complete coverage report (at
least as far as I could tell), so for now it remains in a pipeline with
a DB integration test.
Please let me know if I've inadvertently changed something that was how
it was on purpose.
---
I will say sometimes it's hard to pin down the average time, as a
pipeline could be waiting for a runner for X minutes and that brings the
total up by X minutes as well, but overall this does seem to be faster
on average.
---------
Signed-off-by: jolheiser <john.olheiser@gmail.com>
Co-authored-by: techknowlogick <techknowlogick@gitea.io>
Add test coverage to the important features of
[`routers.web.repo.renderReadmeFile`](067b0c2664/routers/web/repo/view.go (L273));
namely that:
- it can handle looking in docs/, .gitea/, and .github/
- it can handle choosing between multiple competing READMEs
- it prefers the localized README to the markdown README to the
plaintext README
- it can handle broken symlinks when processing all the options
- it uses the name of the symlink, not the name of the target of the
symlink
Replace #23350.
Refactor `setting.Database.UseMySQL` to
`setting.Database.Type.IsMySQL()`.
To avoid mismatching between `Type` and `UseXXX`.
This refactor can fix the bug mentioned in #23350, so it should be
backported.
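A minimal sketch of the idea (the type and the constant values are assumptions):
```go
package setting

// DatabaseType wraps the configured database type so that checks like
// IsMySQL() always stay consistent with the Type value, instead of relying on
// separately maintained UseMySQL/UsePostgreSQL/... booleans.
type DatabaseType string

func (t DatabaseType) IsMySQL() bool      { return t == "mysql" }
func (t DatabaseType) IsPostgreSQL() bool { return t == "postgres" }
func (t DatabaseType) IsMSSQL() bool      { return t == "mssql" }
func (t DatabaseType) IsSQLite3() bool    { return t == "sqlite3" }
```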
`renderReadmeFile` needs `readmeTreelink` as a parameter but gets
`treeLink`.
Their values look like the following:
`treeLink`: `/{OwnerName}/{RepoName}/src/branch/{BranchName}`
`readmeTreelink`:
`/{OwnerName}/{RepoName}/src/branch/{BranchName}/{ReadmeFileName}`
`path.Dir` in
8540fc45b1/routers/web/repo/view.go (L316)
should convert `readmeTreelink` into
`/{OwnerName}/{RepoName}/src/branch/{BranchName}` instead of the current
`/{OwnerName}/{RepoName}/src/branch`.
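A quick illustration of the difference, using placeholder values:
```go
package main

import (
	"fmt"
	"path"
)

func main() {
	readmeTreelink := "/owner/repo/src/branch/main/README.md"
	treeLink := "/owner/repo/src/branch/main"

	// With the correct parameter, the parent directory is the branch root:
	fmt.Println(path.Dir(readmeTreelink)) // /owner/repo/src/branch/main
	// With treeLink, the result is one level too high:
	fmt.Println(path.Dir(treeLink)) // /owner/repo/src/branch
}
```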
Fixes #23151
---------
Co-authored-by: Jason Song <i@wolfogre.com>
Co-authored-by: John Olheiser <john.olheiser@gmail.com>
Co-authored-by: silverwind <me@silverwind.io>
minio/sha256-simd provides additional acceleration for SHA256 using
AVX512, SHA Extensions for x86 and ARM64 for ARM.
It provides a drop-in replacement for crypto/sha256 and if the
extensions are not available it falls back to standard crypto/sha256.
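Since the package is API-compatible with the standard library, adopting it is essentially an import swap; a minimal sketch:
```go
package main

import (
	"fmt"

	// drop-in replacement for crypto/sha256; falls back to a standard
	// implementation when no SIMD/SHA CPU extensions are available
	sha256 "github.com/minio/sha256-simd"
)

func main() {
	sum := sha256.Sum256([]byte("hello gitea"))
	fmt.Printf("%x\n", sum)
}
```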
---------
Signed-off-by: Andrew Thornton <art27@cantab.net>
Co-authored-by: John Olheiser <john.olheiser@gmail.com>
Since #22632, when a commit status has multiple checks, no check is
shown at all (hence no way to see the other checks).
This PR fixes this by always adding a tag with the
`.commit-statuses-trigger` to the DOM (the `.vm` is for vertical
alignment).
![2023-02-13-120528](https://user-images.githubusercontent.com/3864879/218441846-1a79c169-2efd-46bb-9e75-d8b45d7cc8e3.png)
---------
Co-authored-by: Lunny Xiao <xiaolunwen@gmail.com>
Some bugs were caused by having too few unit tests in fundamental packages. This PR
refactors the `setting` package so that creating a unit test will be easier
than before.
- All `LoadFromXXX` functions have been split into two functions:
`InitProviderFromXXX` and `LoadCommonSettings`. The first only includes
the code to create or open an ini file; the second loads the common settings.
- It also renames all functions in setting from `newXXXService` to
`loadXXXSetting` or `loadXXXFrom` to make the function names less
confusing.
- Move `XORMLog` to `SQLLog` because it's a better name for that.
Maybe we should finally move these `loadXXXSetting` into the `XXXInit`
function? Any idea?
---------
Co-authored-by: 6543 <6543@obermui.de>
Co-authored-by: delvh <dev.lh@web.de>
Add a new "exclusive" option per label. This makes it so that when the
label is named `scope/name`, no other label with the same `scope/`
prefix can be set on an issue.
The scope is determined by the last occurrence of `/`, so for example
`scope/alpha/name` and `scope/beta/name` are considered to be in
different scopes and can coexist.
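A sketch of how the scope can be derived from a label name (illustrative, not necessarily the exact Gitea implementation):
```go
package main

import (
	"fmt"
	"strings"
)

// labelScope returns the exclusive scope of a label name, i.e. everything
// before the last '/'. An empty string means the label is unscoped.
func labelScope(name string) string {
	i := strings.LastIndex(name, "/")
	if i < 0 {
		return ""
	}
	return name[:i]
}

func main() {
	fmt.Println(labelScope("scope/alpha/name")) // scope/alpha
	fmt.Println(labelScope("scope/beta/name"))  // scope/beta
	fmt.Println(labelScope("bug"))              // "" (unscoped)
}
```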
Exclusive scopes are not enforced by any database rules, however they
are enforced when editing labels at the models level, automatically
removing any existing labels in the same scope when either attaching a
new label or replacing all labels.
In menus use a circle instead of checkbox to indicate they function as
radio buttons per scope. Issue filtering by label ensures that only a
single scoped label is selected at a time. Clicking with alt key can be
used to remove a scoped label, both when editing individual issues and
batch editing.
Label rendering refactor for consistency and code simplification:
* Labels now consistently have the same shape, emojis and tooltips
everywhere. This includes the label list and label assignment menus.
* In label list, show description below label same as label menus.
* Don't use exactly black/white text colors to look a bit nicer.
* Simplify text color computation. There is no point computing luminance
in linear color space, as this is a perceptual problem and sRGB is
closer to perceptually linear.
* Increase height of label assignment menus to show more labels. Showing
only 3-4 labels at a time leads to a lot of scrolling.
* Render all labels with a new RenderLabel template helper function.
Label creation and editing in multiline modal menu:
* Change label creation to open a modal menu like label editing.
* Change menu layout to place name, description and colors on separate
lines.
* Don't color cancel button red in label editing modal menu.
* Align text to the left in model menu for better readability and
consistent with settings layout elsewhere.
Custom exclusive scoped label rendering:
* Display scoped label prefix and suffix with slightly darker and
lighter background color respectively, and a slanted edge between them
similar to the `/` symbol.
* In menus exclusive labels are grouped with a divider line.
---------
Co-authored-by: Yarden Shoham <hrsi88@gmail.com>
Co-authored-by: Lauris BH <lauris@nix.lv>
To avoid duplicated loads of the same data in an HTTP request, we can set
up a context cache to do that. For example, some pages may load the user
with the same id from the database in different areas of the same page, but
the code paths are hidden deep in two different pieces of logic. How should
we share the user? As a result of this PR, if both entry functions accept
`context.Context` as the first parameter, we just need to refactor
`GetUserByID` to reuse the user from the context cache. Then it will not
be loaded twice in an HTTP request.
But of course, sometimes we would like to reload an object from the
database, that's why `RemoveContextData` is also exposed.
The core context cache is here. It defines a new context
```go
type cacheContext struct {
	ctx  context.Context
	data map[any]map[any]any
	lock sync.RWMutex
}

var cacheContextKey = struct{}{}

func WithCacheContext(ctx context.Context) context.Context {
	return context.WithValue(ctx, cacheContextKey, &cacheContext{
		ctx:  ctx,
		data: make(map[any]map[any]any),
	})
}
```
Then you can use the below 4 methods to read/write/del the data within
the same context.
```go
func GetContextData(ctx context.Context, tp, key any) any
func SetContextData(ctx context.Context, tp, key, value any)
func RemoveContextData(ctx context.Context, tp, key any)
func GetWithContextCache[T any](ctx context.Context, cacheGroupKey string, cacheTargetID any, f func() (T, error)) (T, error)
```
Then let's take a look at how `system.GetSetting` implements it.
```go
func GetSetting(ctx context.Context, key string) (string, error) {
	return cache.GetWithContextCache(ctx, contextCacheKey, key, func() (string, error) {
		return cache.GetString(genSettingCacheKey(key), func() (string, error) {
			res, err := GetSettingNoCache(ctx, key)
			if err != nil {
				return "", err
			}
			return res.SettingValue, nil
		})
	})
}
```
First, it checks whether the context data includes the setting object with the
key. If not, it queries the global cache, which may be a memory or Redis
cache. If that also misses, it gets the object from the database. In the
end, if the object came from the global cache or the database, it is
stored into the context cache.
An object stored in the context cache will only be destroyed after the
context disappears.
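For example, an expensive loader can be wrapped with `GetWithContextCache` roughly like this (a hypothetical usage sketch; it assumes the two helpers shown above are exported from the `modules/cache` package):
```go
package main

import (
	"context"
	"fmt"

	"code.gitea.io/gitea/modules/cache"
)

var dbQueries int // counts how often the "expensive" load really runs

// loadUserName simulates something like GetUserByID: the inner function is
// only executed on a context-cache miss.
func loadUserName(ctx context.Context, id int64) (string, error) {
	return cache.GetWithContextCache(ctx, "user_names", id, func() (string, error) {
		dbQueries++
		return fmt.Sprintf("user-%d", id), nil
	})
}

func main() {
	ctx := cache.WithCacheContext(context.Background())

	name1, _ := loadUserName(ctx, 1)
	name2, _ := loadUserName(ctx, 1) // served from the context cache

	fmt.Println(name1, name2, dbQueries) // user-1 user-1 1
}
```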
As discussed in #22847, the helpers in helpers.less need to have a
separate prefix as they are causing conflicts with Fomantic styles.
This will allow us to have the `.gt-hidden { display:none !important; }`
style that is needed for the reverted PR.
Of note in doing this I have noticed that there was already a conflict
with at least one chroma style which this PR now avoids.
I've also added in the `gt-hidden` style that matches the tailwind one
and switched the code that needed it to use that.
Signed-off-by: Andrew Thornton <art27@cantab.net>
---------
Signed-off-by: Andrew Thornton <art27@cantab.net>
Co-authored-by: wxiaoguang <wxiaoguang@gmail.com>
Fixes #19555
Test-Instructions:
https://github.com/go-gitea/gitea/pull/21441#issuecomment-1419438000
This PR implements the mapping of user groups provided by OIDC providers
to organization teams in Gitea. The main part is a refactoring of the existing
LDAP code to make it usable from different providers.
Refactorings:
- Moved the router auth code from module to service because of import
cycles
- Changed some model methods to take a `Context` parameter
- Moved the mapping code from LDAP to a common location
I've tested it with Keycloak but other providers should work too. The
JSON mapping format is the same as for LDAP.
![grafik](https://user-images.githubusercontent.com/1666336/195634392-3fc540fc-b229-4649-99ac-91ae8e19df2d.png)
---------
Co-authored-by: Lunny Xiao <xiaolunwen@gmail.com>
This PR follows #21535 (and replaces #22592)
## Review without space diff
https://github.com/go-gitea/gitea/pull/22678/files?diff=split&w=1
## Purpose of this PR
1. Make the git module commands completely safe (risky user inputs won't be
passed as argument options anymore)
2. Avoid low-level mistakes like
https://github.com/go-gitea/gitea/pull/22098#discussion_r1045234918
3. Remove deprecated and dirty `CmdArgCheck` function, hide the `CmdArg`
type
4. Simplify code when using git command
## The main idea of this PR
* Move the `git.CmdArg` to the `internal` package, then no other package
except `git` could use it. Then developers could never do
`AddArguments(git.CmdArg(userInput))` any more.
* Introduce `git.ToTrustedCmdArgs`, it's for user-provided and already
trusted arguments. It's only used in a few cases, for example: use git
arguments from config file, help unit test with some arguments.
* Introduce `AddOptionValues` and `AddOptionFormat`, they make code more
clear and simple:
* Before: `AddArguments("-m").AddDynamicArguments(message)`
* After: `AddOptionValues("-m", message)`
* -
* Before: `AddArguments(git.CmdArg(fmt.Sprintf("--author='%s <%s>'",
sig.Name, sig.Email)))`
* After: `AddOptionFormat("--author='%s <%s>'", sig.Name, sig.Email)`
## FAQ
### Why these changes were not done in #21535 ?
#21535 is mainly a search&replace, it did its best to not change too
much logic.
Making the framework better needs a lot of changes, so this separate PR
is needed as the second step.
### The naming of `AddOptionXxx`
According to git's manual, the `--xxx` part is called `option`.
### How can it guarantee that `internal.CmdArg` won't be misused?
Go's specification guarantees that. Trying to access another package's
internal package causes a compilation error.
And, `golangci-lint` also denies the git/internal package. Only the
`git/command.go` can use it carefully.
### There is still a `ToTrustedCmdArgs`, will it still allow developers
to make mistakes and pass untrusted arguments?
Generally speaking, no. Because when using `ToTrustedCmdArgs`, the code
will be very complex (see the changes for examples). Then developers and
reviewers can know that something might be unreasonable.
### Why was there a `CmdArgCheck` and why is it removed?
At the moment of #21535, to reduce unnecessary changes, `CmdArgCheck`
was introduced as a hacky patch. Now, almost all code could be written
as `cmd := NewCommand(); cmd.AddXxx(...)`, then there is no need for
`CmdArgCheck` anymore.
### Why is much of the code for `signArg == ""` deleted?
Because in the old code, `signArg` could never be an empty string; it's
either `-S[key-id]` or `--no-gpg-sign`. So the `signArg == ""` branches are
just dead code.
---------
Co-authored-by: Lunny Xiao <xiaolunwen@gmail.com>
There are 2 separate flows of creating a user: authentication and source
sync.
When a group filter is defined, source sync ignores the group filter, while
authentication respects it.
With this PR I've fixed this behavior, so both flows now apply this
filter when searching users in LDAP in a unified way.
- Unified LDAP group membership lookup for authentication and source
sync flows
- Replaced custom group membership lookup (used for authentication flow)
with an existing listLdapGroupMemberships method (used for source sync
flow)
- Modified listLdapGroupMemberships and getUserAttributeListedInGroup in
a way group lookup could be called separately
- Added user filtering based on a group membership for a source sync
- Added tests to cover this logic
Co-authored-by: Pavel Ezhov <paejov@gmail.com>
Co-authored-by: Lunny Xiao <xiaolunwen@gmail.com>
Disable this test for the moment because the IMAP container image used
seems unstable, which results in many failed CI builds.
Co-authored-by: Jason Song <i@wolfogre.com>
This PR adds the support for scopes of access tokens, mimicking the
design of GitHub OAuth scopes.
The core logic changes are in `models/auth`: the `AccessToken`
struct now has a `Scope` field. The normalized (no duplicate scopes),
comma-separated scope string is stored in the `access_token`
table in the database.
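For illustration, the normalization could look roughly like this (a sketch under assumed rules, not the exact implementation):
```go
package main

import (
	"fmt"
	"strings"
)

// normalizeScopes removes duplicates from a comma-separated scope string and
// returns the comma-separated form to be stored in the access_token table.
func normalizeScopes(raw string) string {
	seen := map[string]bool{}
	out := make([]string, 0, 4)
	for _, s := range strings.Split(raw, ",") {
		s = strings.TrimSpace(s)
		if s != "" && !seen[s] {
			seen[s] = true
			out = append(out, s)
		}
	}
	return strings.Join(out, ",")
}

func main() {
	fmt.Println(normalizeScopes("repo, read:user, repo")) // repo,read:user
}
```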
In `services/auth`, the scope will be stored in context, which will be
used by `reqToken` middleware in API calls. Only OAuth2 tokens will have
granular token scopes, while others like BasicAuth will default to scope
`all`.
A large amount of work happens in `routers/api/v1/api.go` and the
corresponding `tests/integration` tests: adding the necessary scopes
to each of the API calls as they fit.
- [x] Add `Scope` field to `AccessToken`
- [x] Add access control to all API endpoints
- [x] Update frontend & backend for when creating tokens
- [x] Add a database migration for `scope` column (enable 'all' access
to past tokens)
I'm aiming to complete it before Gitea 1.19 release.
Fixes #4300
This PR introduces glob matching for protected branch names. The separator is
`/`; `*` matches non-separator characters, and `**` matches across
separators.
It also supports entering an existing or non-existing branch name as the matching
condition, and an exact branch name condition has higher priority than a glob rule.
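A sketch of the matching rule, assuming the gobwas/glob library with `/` registered as the separator (the library choice here is an assumption, not a statement about the PR's implementation):
```go
package main

import (
	"fmt"

	"github.com/gobwas/glob"
)

func main() {
	// '*' matches within a single path segment, '**' also crosses '/'.
	anyDepth := glob.MustCompile("release/**", '/')
	fmt.Println(anyDepth.Match("release/v1.19"))        // true
	fmt.Println(anyDepth.Match("release/v1.19/hotfix")) // true

	oneLevel := glob.MustCompile("feature/*", '/')
	fmt.Println(oneLevel.Match("feature/a"))   // true
	fmt.Println(oneLevel.Match("feature/a/b")) // false, '*' stops at '/'
}
```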
Should fix #2529 and #15705
screenshots
<img width="1160" alt="image"
src="https://user-images.githubusercontent.com/81045/205651179-ebb5492a-4ade-4bb4-a13c-965e8c927063.png">
Co-authored-by: zeripath <art27@cantab.net>
closes #13585, fixes #9067, fixes #2386
ref #6226
ref #6219, fixes #745
This PR adds support to process incoming emails to perform actions.
Currently I added handling of replies and unsubscribing from
issues/pulls. In contrast to #13585 the IMAP IDLE command is used
instead of polling which results (in my opinion 😉) in cleaner code.
Procedure:
- When sending an issue/pull reply email, a token is generated which is
present in the Reply-To and References header.
- IMAP IDLE waits until a new email arrives
- The token tells which action should be performed
A possible signature and/or reply gets stripped from the content.
I added a new service to the drone pipeline to test the receiving of
incoming mails. If we keep this in, we may test our outgoing emails too
in future.
Co-authored-by: silverwind <me@silverwind.io>
Co-authored-by: Lunny Xiao <xiaolunwen@gmail.com>
- Move the file `compare.go` and `slice.go` to `slice.go`.
- Fix `ExistsInSlice`, it's buggy
- It uses `sort.Search`, so it assumes that the input slice is sorted.
- It passes `func(i int) bool { return slice[i] == target })` to
`sort.Search`, that's incorrect, check the doc of `sort.Search`.
- Combine `IsInt64InSlice(int64, []int64)` and `ExistsInSlice(string,
[]string)` into `SliceContains[T]([]T, T)` (see the sketch after this list).
- Combine `IsSliceInt64Eq([]int64, []int64)` and `IsEqualSlice([]string,
[]string)` into `SliceSortedEqual[T]([]T, []T)`.
- Add `SliceEqual[T]([]T, []T)` as a distinction from
`SliceSortedEqual[T]([]T, []T)`.
- Redesign `RemoveIDFromList([]int64, int64) ([]int64, bool)` to
`SliceRemoveAll[T]([]T, T) []T`.
- Add `SliceContainsFunc[T]([]T, func(T) bool)` and
`SliceRemoveAllFunc[T]([]T, func(T) bool)` for general use.
- Add comments to explain why not `golang.org/x/exp/slices`.
- Add unit tests.
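A rough sketch of what the new generic helpers look like (signatures follow the list above; the bodies are illustrative):
```go
package util

// SliceContainsFunc reports whether any element of s satisfies f.
func SliceContainsFunc[T any](s []T, f func(T) bool) bool {
	for _, v := range s {
		if f(v) {
			return true
		}
	}
	return false
}

// SliceContains reports whether target exists in s. Unlike the old
// ExistsInSlice, it does not require s to be sorted.
func SliceContains[T comparable](s []T, target T) bool {
	return SliceContainsFunc(s, func(v T) bool { return v == target })
}

// SliceRemoveAll returns a slice with all occurrences of target removed.
func SliceRemoveAll[T comparable](s []T, target T) []T {
	out := s[:0]
	for _, v := range s {
		if v != target {
			out = append(out, v)
		}
	}
	return out
}
```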
This puts the fuzz tests in the same directory as other tests and eases
the integration in OSS-Fuzz
Co-authored-by: techknowlogick <techknowlogick@gitea.io>
After #22362, we can feel free to use transactions without
`db.DefaultContext`.
And there are still lots of models using `db.DefaultContext`, I think we
should refactor them carefully and one by one.
Co-authored-by: Lunny Xiao <xiaolunwen@gmail.com>
There are repeated failures with this test which appear related to
failures in getTokenForLoggedInUser. It is difficult to further evaluate
the cause of these failures as we are not given further information.
This PR will attempt to fix this.
First it adds some extra logging and it uses the csrf cookie primarily
for the csrf value.
If the problem does not occur again with those changes we could merge,
assume that it is fixed and hope that if it occurs in future the
additional logging will be helpful.
If not, I will add more changes in an attempt to fix it.
Fix #22105
Signed-off-by: Andrew Thornton <art27@cantab.net>
Co-authored-by: Lunny Xiao <xiaolunwen@gmail.com>
Co-authored-by: John Olheiser <john.olheiser@gmail.com>
Co-authored-by: techknowlogick <matti@mdranta.net>
Co-authored-by: delvh <dev.lh@web.de>
- There have been [CI
failures](https://codeberg.org/forgejo/forgejo/issues/111) in this
specific test function. The code itself looks good; the CI failures
are likely caused by not specifying any field in `TeamUser`, which might
have caused the unit test to return another `TeamUser` than the code
expects.
Co-authored-by: KN4CK3R <admin@oldschoolhack.me>
Close #14601, Fix #3690
Revive of #14601.
Updated to current code, cleanup and added more read/write checks.
Signed-off-by: Andrew Thornton <art27@cantab.net>
Signed-off-by: Andre Bruch <ab@andrebruch.com>
Co-authored-by: zeripath <art27@cantab.net>
Co-authored-by: 6543 <6543@obermui.de>
Co-authored-by: Norwin <git@nroo.de>
Co-authored-by: Lunny Xiao <xiaolunwen@gmail.com>
Change all license headers to comply with REUSE specification.
Fix #16132
Co-authored-by: flynnnnnnnnnn <flynnnnnnnnnn@github>
Co-authored-by: John Olheiser <john.olheiser@gmail.com>
This PR addresses #19586
I added a mutex to the upload version creation which will prevent the
push errors when two requests try to create these database entries. I'm
not sure if this should be the final solution for this problem.
I added a workaround to allow a reupload of missing blobs. Normally a
reupload is skipped because the database knows the blob is already
present. The workaround checks if the blob exists on the file system.
This should not be needed anymore with the above fix so I marked this
code to be removed with Gitea v1.20.
Co-authored-by: Lunny Xiao <xiaolunwen@gmail.com>
This PR adds a context parameter to a bunch of methods. Some helper
`xxxCtx()` methods got replaced with the normal name now.
Co-authored-by: delvh <dev.lh@web.de>
Co-authored-by: Lunny Xiao <xiaolunwen@gmail.com>
Related #20471
This PR adds global quota limits for the package registry. Settings for
individual users/orgs can be added in a separate PR using the settings
table.
Co-authored-by: Lauris BH <lauris@nix.lv>
Co-authored-by: Lunny Xiao <xiaolunwen@gmail.com>
This addresses #21707 and adds a second package test case for a
non-semver compatible version (this might be overkill though since you
could also edit the old package version to have an epoch in front and
see the error, this just seemed more flexible for the future).
Co-authored-by: KN4CK3R <admin@oldschoolhack.me>
Close https://github.com/go-gitea/gitea/issues/21640
Before: Gitea can create users like ".xxx" or "x..y", which is not
ideal: it's already a consensus that dot filenames have special
meanings, and `a..b` is a confusing name when doing a cross-repo compare.
After: stricter
Co-authored-by: Jason Song <i@wolfogre.com>
Co-authored-by: Lunny Xiao <xiaolunwen@gmail.com>
Co-authored-by: delvh <dev.lh@web.de>
_This is a different approach to #20267, I took the liberty of adapting
some parts, see below_
## Context
In some cases, a webhook endpoint requires some kind of authentication.
The usual way is by sending a static `Authorization` header, with a
given token. For instance:
- Matrix expects a `Bearer <token>` (already implemented, by storing the
header cleartext in the metadata - which is buggy on retry #19872)
- TeamCity #18667
- Gitea instances #20267
- SourceHut https://man.sr.ht/graphql.md#authentication-strategies (this
is my actual personal need :)
## Proposed solution
Add a dedicated encrypted column to the webhook table (instead of storing
it as meta as proposed in #20267), so that it becomes available for all
present and future hook types (especially the custom ones, #19307).
This would also solve the buggy matrix retry #19872.
As a first step, I would recommend focusing on the backend logic and
improve the frontend at a later stage. For now the UI is a simple
`Authorization` field (which could be later customized with `Bearer` and
`Basic` switches):
![2022-08-23-142911](https://user-images.githubusercontent.com/3864879/186162483-5b721504-eef5-4932-812e-eb96a68494cc.png)
The header name is hard-coded, since I couldn't find any use case
justifying otherwise.
## Questions
- What do you think of this approach? @justusbunsi @Gusted @silverwind
- ~~How are the migrations generated? Do I have to manually create a new
file, or is there a command for that?~~
- ~~I started adding it to the API: should I complete it or should I
drop it? (I don't know how much the API is actually used)~~
## Done as well:
- add a migration for the existing matrix webhooks and remove the
`Authorization` logic there
_Closes #19872_
Co-authored-by: Lunny Xiao <xiaolunwen@gmail.com>
Co-authored-by: Gusted <williamzijl7@hotmail.com>
Co-authored-by: delvh <dev.lh@web.de>
I found myself wondering whether a PR I scheduled for automerge was
actually merged. It was, but I didn't receive a mail notification for it
- that makes sense considering I am the doer and usually don't want to
receive such notifications. But ideally I want to receive a notification
when a PR was merged because I scheduled it for automerge.
This PR implements exactly that.
The implementation works, but I wonder if there's a way to avoid passing
the "This PR was automerged" state down so much. I tried solving this
via the database (checking if there's an automerge scheduled for this PR
when sending the notification) but that did not work reliably, probably
because sending the notification happens async and the entry might have
already been deleted. My implementation might be the most
straightforward but maybe not the most elegant.
Signed-off-by: Andrew Thornton <art27@cantab.net>
Co-authored-by: Lauris BH <lauris@nix.lv>
Co-authored-by: Andrew Thornton <art27@cantab.net>
Co-authored-by: Lunny Xiao <xiaolunwen@gmail.com>
I noticed an admin is not allowed to upload packages for other users
because `ctx.IsSigned` was not set.
I added a check for `user.IsActive` and `user.ProhibitLogin` too because
both were not checked. Tests enforce this now.
Co-authored-by: Lauris BH <lauris@nix.lv>
The OAuth spec [defines two types of
client](https://datatracker.ietf.org/doc/html/rfc6749#section-2.1),
confidential and public. Previously Gitea assumed all clients to be
confidential.
> OAuth defines two client types, based on their ability to authenticate
securely with the authorization server (i.e., ability to
> maintain the confidentiality of their client credentials):
>
> confidential
> Clients capable of maintaining the confidentiality of their
credentials (e.g., client implemented on a secure server with
> restricted access to the client credentials), or capable of secure
client authentication using other means.
>
> **public
> Clients incapable of maintaining the confidentiality of their
credentials (e.g., clients executing on the device used by the resource
owner, such as an installed native application or a web browser-based
application), and incapable of secure client authentication via any
other means.**
>
> The client type designation is based on the authorization server's
definition of secure authentication and its acceptable exposure levels
of client credentials. The authorization server SHOULD NOT make
assumptions about the client type.
https://datatracker.ietf.org/doc/html/rfc8252#section-8.4
> Authorization servers MUST record the client type in the client
registration details in order to identify and process requests
accordingly.
Require PKCE for public clients:
https://datatracker.ietf.org/doc/html/rfc8252#section-8.1
> Authorization servers SHOULD reject authorization requests from native
apps that don't use PKCE by returning an error message
Fixes #21299
Co-authored-by: wxiaoguang <wxiaoguang@gmail.com>
Co-authored-by: Lunny Xiao <xiaolunwen@gmail.com>
According to the OAuth spec
https://datatracker.ietf.org/doc/html/rfc6749#section-6 when "Refreshing
an Access Token"
> The authorization server MUST ... require client authentication for
confidential clients
Fixes #21418
Co-authored-by: Gusted <williamzijl7@hotmail.com>
Co-authored-by: Lunny Xiao <xiaolunwen@gmail.com>
Added checks for the logged-in user's token.
Some builds fail at unrelated tests due to a missing token.
Example:
https://drone.gitea.io/go-gitea/gitea/62011/2/14
Co-authored-by: wxiaoguang <wxiaoguang@gmail.com>
depends on #18871
Added some api integration tests to help testing of #18798.
Co-authored-by: Lunny Xiao <xiaolunwen@gmail.com>
Co-authored-by: wxiaoguang <wxiaoguang@gmail.com>
Co-authored-by: zeripath <art27@cantab.net>
Co-authored-by: techknowlogick <techknowlogick@gitea.io>
NuGet symbol file lookup returned 404 on Visual Studio 2019 due to the
case-sensitive API router. The API router should accept a case-insensitive GUID.
Co-authored-by: techknowlogick <techknowlogick@gitea.io>
Co-authored-by: wxiaoguang <wxiaoguang@gmail.com>
Fixes #21250
Related #20414
Conan packages don't have to follow SemVer.
The migration fixes the setting for all existing Conan and Generic
(#20414) packages.
This adds an API endpoint `/files` to PRs that allows getting a list of changed files.
built upon #18228, reviews there are included
closes https://github.com/go-gitea/gitea/issues/654
Co-authored-by: Anton Bracke <anton@ju60.de>
Co-authored-by: 6543 <6543@obermui.de>
Co-authored-by: wxiaoguang <wxiaoguang@gmail.com>
This PR would presumably
Fix #20522, Fix #18773, Fix #19069, Fix #21077, Fix #13622
-----
1. Check whether unit type is currently enabled
2. Check if it _will_ be enabled via opt
3. Allow modification as necessary
Signed-off-by: jolheiser <john.olheiser@gmail.com>
Co-authored-by: wxiaoguang <wxiaoguang@gmail.com>
Co-authored-by: 6543 <6543@obermui.de>
Addition to #20734, Fixes #20717
The `/index.json` endpoint needs to be accessible even if the registry
is private. The NuGet client uses this endpoint without
authentication.
The old fix only works if the NuGet cli is used with `--source <name>`
but not with `--source <url>/index.json`.
Co-authored-by: wxiaoguang <wxiaoguang@gmail.com>
Close #20098. In the NPM registry API, implemented to match what's described by https://github.com/npm/registry/blob/master/docs/REGISTRY-API.md#get-v1search
Currently have only implemented the bare minimum to work with the [Unity Package Manager](https://docs.unity3d.com/Manual/upm-ui.html).
Co-authored-by: Jack Vine <jackv@jack-lemur-suse.cat-prometheus.ts.net>
Co-authored-by: KN4CK3R <admin@oldschoolhack.me>
Co-authored-by: Lunny Xiao <xiaolunwen@gmail.com>
Co-authored-by: wxiaoguang <wxiaoguang@gmail.com>
The problem was that many PR review components loaded by `Show more`
received the same ID as previous batches, which confuses browsers (when
clicked). All such occurrences should now be fixed.
Additionally improved the background of the `viewed` checkbox.
Lastly, the `go-licenses.json` was automatically updated.
Fixes #21228.
Fixes #20681.
Co-authored-by: wxiaoguang <wxiaoguang@gmail.com>
Fix #20863
When REQUIRE_SIGNIN_VIEW = true, even public repositories can only be seen after you log in. Packages should likewise not be accessible without logging in.
Co-authored-by: Lauris BH <lauris@nix.lv>
Co-authored-by: wxiaoguang <wxiaoguang@gmail.com>
Fixes #21206
If user and viewer are equal, the method should return true.
Also, the common organization check was wrong, as `count` can never be
less than 0.
Co-authored-by: 6543 <6543@obermui.de>
Co-authored-by: Lunny Xiao <xiaolunwen@gmail.com>
Remove this small, but unnecessary
[module](https://fomantic-ui.com/elements/image.html) and use `img`
selector over previous `.image`. Did a few tests, could not notice any
visual regression.
Co-authored-by: 6543 <6543@obermui.de>
Co-authored-by: Lauris BH <lauris@nix.lv>
The `go-licenses` make task introduced in #21034 is being run on make vendor
and occasionally causes an empty go-licenses file if the vendors need to
change. This should be moved to the generate task as it is a generated file.
Now because of this change we also need to split generation into two separate
steps:
1. `generate-backend`
2. `generate-frontend`
In the future it would probably be useful to make `generate-swagger` part of `generate-frontend`, but that doesn't currently work with our .drone.yml
Ref #21034
Signed-off-by: Andrew Thornton <art27@cantab.net>
Signed-off-by: Andrew Thornton <art27@cantab.net>
Co-authored-by: delvh <dev.lh@web.de>
A testing cleanup.
This pull request replaces `os.MkdirTemp` with `t.TempDir`. We can use the `T.TempDir` function from the `testing` package to create a temporary directory. The directory created by `T.TempDir` is automatically removed when the test and all its subtests complete.
This saves us at least 2 lines (error check, and cleanup) on every instance, or in some cases adds cleanup that we forgot.
Reference: https://pkg.go.dev/testing#T.TempDir
```go
func TestFoo(t *testing.T) {
	// before
	tmpDir, err := os.MkdirTemp("", "")
	require.NoError(t, err)
	defer os.RemoveAll(tmpDir)

	// now
	tmpDir := t.TempDir()
}
```
Signed-off-by: Eng Zer Jun <engzerjun@gmail.com>