Commit 4b792b0d authored by Ruben Davila

Merge branch 'master' into 8-11-stable

parents 759265c3 7f853e22
......@@ -117,6 +117,9 @@ v 8.11.0 (unreleased)
- Speed up todos queries by limiting the projects set we join with
- Ensure file editing in UI does not overwrite committed changes without warning user
v 8.10.6 (unreleased)
- Fix import/export configuration missing some included attributes
v 8.10.5
- Add a data migration to fix some missing timestamps in the members table. !5670
- Revert the "Defend against 'Host' header injection" change in the source NGINX templates. !5706
......
class Import::GitlabProjectsController < Import::BaseController
before_action :verify_gitlab_project_import_enabled
before_action :authenticate_admin!
def new
@namespace_id = project_params[:namespace_id]
......@@ -47,4 +48,8 @@ class Import::GitlabProjectsController < Import::BaseController
:path, :namespace_id, :file
)
end
def authenticate_admin!
render_404 unless current_user.is_admin?
end
end
......@@ -77,7 +77,7 @@
= link_to "#", class: 'btn js-toggle-button import_git' do
= icon('git', text: 'Repo by URL')
%div{ class: 'import_gitlab_project' }
- if gitlab_project_import_enabled?
- if gitlab_project_import_enabled? && current_user.is_admin?
= link_to new_import_gitlab_project_path, class: 'btn btn_import_gitlab_project project-submit' do
= icon('gitlab', text: 'GitLab export')
......
......@@ -30,7 +30,11 @@
- [Rake tasks](rake_tasks.md) for development
- [Shell commands](shell_commands.md) in the GitLab codebase
- [Sidekiq debugging](sidekiq_debugging.md)
## Databases
- [What requires downtime?](what_requires_downtime.md)
- [Adding database indexes](adding_database_indexes.md)
## Compliance
......
# Adding Database Indexes
Indexes can be used to speed up database queries, but when should you add a new
index? Traditionally the answer to this question has been to add an index for
every column used for filtering or joining data. For example, consider the
following query:
```sql
SELECT *
FROM projects
WHERE user_id = 2;
```
Here we are filtering by the `user_id` column and as such a developer may decide
to index this column.
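For example, such an index could be created as follows (a minimal sketch; the index name is illustrative, and PostgreSQL will generate one if it is omitted):
```sql
-- Hypothetical index on projects.user_id for the query above.
CREATE INDEX index_projects_on_user_id ON projects (user_id);
```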
While in certain cases indexing columns using the above approach may make
sense, it can actually have a negative impact. Whenever you write data to a
table, any existing indexes need to be updated. The more indexes there are, the
slower this can become. Indexes can also take up quite some disk space depending
on the amount of data indexed and the index type. For example, PostgreSQL offers
"GIN" indexes which can be used to index certain data types that cannot be
indexed by regular btree indexes. These indexes, however, generally take up more
space and are slower to update compared to btree indexes.
Because of all this, one should not blindly add a new index for every column
used to filter data. Instead, ask yourself the following questions:
1. Can I write my query in such a way that it re-uses as many existing indexes
as possible?
2. Is the data going to be large enough that using an index will actually be
faster than just iterating over the rows in the table?
3. Is the overhead of maintaining the index worth the reduction in query
timings?
We'll explore every question in detail below.
## Re-using Indexes
The first step is to make sure your query re-uses as many existing indexes as
possible. For example, consider the following query:
```sql
SELECT *
FROM todos
WHERE user_id = 123
AND state = 'open';
```
Now imagine we already have an index on the `user_id` column, but not on the
`state` column. One may think this query will perform badly due to `state` being
unindexed. In reality the query may perform just fine, given that the index on
`user_id` can filter out enough rows.
The best way to determine if indexes are re-used is to run your query using
`EXPLAIN ANALYZE`. Depending on the joined tables and the other columns used for
filtering, you may find an extra index is not going to make much (if any)
difference. On the other hand, you may determine that the index _may_ make a
difference.
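For example, a sketch for the query above (the index name and the plan output shown here are illustrative; the actual plan depends on your data and PostgreSQL version):
```sql
EXPLAIN ANALYZE
SELECT *
FROM todos
WHERE user_id = 123
AND state = 'open';
-- Illustrative output: the existing index on user_id is scanned, and the
-- unindexed state predicate is applied as a filter on the matching rows.
--
--  Index Scan using index_todos_on_user_id on todos  (cost=0.43..8.46 rows=1 width=94) (actual time=0.021..0.034 rows=2 loops=1)
--    Index Cond: (user_id = 123)
--    Filter: ((state)::text = 'open'::text)
```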
In short:
1. Try to write your query in such a way that it re-uses as many existing
indexes as possible.
2. Run the query using `EXPLAIN ANALYZE` and study the output to find the
   best-performing query.
## Data Size
A database may decide not to use an index, despite one existing, if a regular
sequential scan (= simply iterating over all existing rows) is faster. This is
especially the case for small tables.
If a table is expected to grow in size, and you expect your query will have to
filter out a lot of rows, you may want to consider adding an index. If the table
size is very small (e.g. only a handful of rows), or any existing indexes
already filter out enough rows, you may _not_ want to add a new index.
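You can verify this with `EXPLAIN`: on a table holding only a handful of rows, PostgreSQL will typically prefer a sequential scan even if a matching index exists. A sketch, with illustrative output:
```sql
EXPLAIN SELECT * FROM projects WHERE user_id = 2;
-- Illustrative plan for a near-empty table: the planner skips any index
-- because reading all rows directly is cheaper.
--
--  Seq Scan on projects  (cost=0.00..1.05 rows=1 width=64)
--    Filter: (user_id = 2)
```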
## Maintenance Overhead
Indexes have to be updated on every table write. In the case of PostgreSQL,
_all_ existing indexes are updated whenever data is written to a table. As a
result, having many indexes on the same table will slow down writes.
Because of this, ask yourself: is the reduction in query timings worth the
overhead of maintaining an extra index?
If adding an index reduces SELECT timings by 5 milliseconds but increases
INSERT/UPDATE/DELETE timings by 10 milliseconds, then the index may not be worth
it. On the other hand, if SELECT timings are reduced but INSERT/UPDATE/DELETE
timings are not affected, you may want to add the index after all.
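One rough way to gauge this trade-off in `psql` is to time representative statements before and after adding the index (a sketch only; the statements and values are illustrative, and real measurements should use representative data and repeated runs):
```sql
-- Enable psql's built-in per-statement timing.
\timing on
-- A representative read; compare its timing with and without the index.
SELECT * FROM todos WHERE user_id = 123 AND state = 'open';
-- A representative write; each index on todos makes this slightly more
-- expensive. The id and state values here are purely illustrative.
UPDATE todos SET state = 'done' WHERE id = 1;
```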
## Finding Unused Indexes
To see which indexes are unused you can run the following query:
```sql
SELECT relname as table_name,
       indexrelname as index_name,
       idx_scan,
       idx_tup_read,
       idx_tup_fetch,
       pg_size_pretty(pg_relation_size(indexrelname::regclass))
FROM pg_stat_all_indexes
WHERE schemaname = 'public'
AND idx_scan = 0
AND idx_tup_read = 0
AND idx_tup_fetch = 0
ORDER BY pg_relation_size(indexrelname::regclass) desc;
```
This query outputs a list of all indexes that are never used, sorted by index
size in descending order. It can be useful to determine whether indexes added in
the past are still needed. More information on the meaning of the various
columns can be found at
<https://www.postgresql.org/docs/current/static/monitoring-stats.html>.
Because the output of this query relies on the actual usage of your database,
it may be affected by factors such as (but not limited to):
* Certain queries never being executed, thus not being able to use certain
  indexes.
* Certain tables having little data, resulting in PostgreSQL using sequential
  scans instead of index scans.
In other words, this data is only reliable for a frequently used database with
plenty of data and with as many GitLab features enabled (and being used) as
possible.
......@@ -7,8 +7,7 @@
> than that of the exporter.
> - For existing installations, the project import option has to be enabled in
> application settings (`/admin/application_settings`) under 'Import sources'.
> Ask your administrator if you don't see the **GitLab export** button when
> creating a new project.
> You will have to be an administrator to enable and use the import functionality.
> - If you are an administrator, you can find some useful Rake tasks in the
>   [import_export](../../../administration/raketasks/project_import_export.md)
>   documentation.
......
......@@ -9,7 +9,7 @@ Background:
@javascript
Scenario: I should see New Projects page
Then I see "New Project" page
Then I see all possible import optios
Then I see all possible import options
@javascript
Scenario: I should see instructions on how to import from Git URL
......
......@@ -14,14 +14,13 @@ class Spinach::Features::NewProject < Spinach::FeatureSteps
expect(page).to have_content('Project name')
end
step 'I see all possible import optios' do
step 'I see all possible import options' do
expect(page).to have_link('GitHub')
expect(page).to have_link('Bitbucket')
expect(page).to have_link('GitLab.com')
expect(page).to have_link('Gitorious.org')
expect(page).to have_link('Google Code')
expect(page).to have_link('Repo by URL')
expect(page).to have_link('GitLab export')
end
step 'I click on "Import project from GitHub"' do
......
......@@ -57,19 +57,16 @@ module Gitlab
# +value+ existing model to be included in the hash
# +json_config_hash+ the original hash containing the root model
def create_model_value(current_key, value, json_config_hash)
parsed_hash = { include: value }
parse_hash(value, parsed_hash)
json_config_hash[current_key] = parsed_hash
json_config_hash[current_key] = parse_hash(value) || { include: value }
end
# Calls attributes finder to parse the hash and returns a hash with any
# attributes added to it, or `nil` when there is nothing to add
#
# +value+ existing model to be included in the hash
def parse_hash(value, parsed_hash)
def parse_hash(value)
@attributes_finder.parse(value) do |hash|
parsed_hash = { include: hash_or_merge(value, hash) }
{ include: hash_or_merge(value, hash) }
end
end
......
......@@ -3,8 +3,9 @@ require 'spec_helper'
feature 'project import', feature: true, js: true do
include Select2Helper
let(:user) { create(:admin) }
let!(:namespace) { create(:namespace, name: "asd", owner: user) }
let(:admin) { create(:admin) }
let(:normal_user) { create(:user) }
let!(:namespace) { create(:namespace, name: "asd", owner: admin) }
let(:file) { File.join(Rails.root, 'spec', 'features', 'projects', 'import_export', 'test_project_export.tar.gz') }
let(:export_path) { "#{Dir::tmpdir}/import_file_spec" }
let(:project) { Project.last }
......@@ -12,66 +13,87 @@ feature 'project import', feature: true, js: true do
background do
allow_any_instance_of(Gitlab::ImportExport).to receive(:storage_path).and_return(export_path)
login_as(user)
end
after(:each) do
FileUtils.rm_rf(export_path, secure: true)
end
scenario 'user imports an exported project successfully' do
expect(Project.all.count).to be_zero
context 'admin user' do
before do
login_as(admin)
end
visit new_project_path
scenario 'user imports an exported project successfully' do
expect(Project.all.count).to be_zero
select2('2', from: '#project_namespace_id')
fill_in :project_path, with: 'test-project-path', visible: true
click_link 'GitLab export'
visit new_project_path
expect(page).to have_content('GitLab project export')
expect(URI.parse(current_url).query).to eq('namespace_id=2&path=test-project-path')
select2('2', from: '#project_namespace_id')
fill_in :project_path, with: 'test-project-path', visible: true
click_link 'GitLab export'
attach_file('file', file)
expect(page).to have_content('GitLab project export')
expect(URI.parse(current_url).query).to eq('namespace_id=2&path=test-project-path')
click_on 'Import project' # import starts
attach_file('file', file)
expect(project).not_to be_nil
expect(project.issues).not_to be_empty
expect(project.merge_requests).not_to be_empty
expect(project_hook).to exist
expect(wiki_exists?).to be true
expect(project.import_status).to eq('finished')
end
click_on 'Import project' # import starts
expect(project).not_to be_nil
expect(project.issues).not_to be_empty
expect(project.merge_requests).not_to be_empty
expect(project_hook).to exist
expect(wiki_exists?).to be true
expect(project.import_status).to eq('finished')
end
scenario 'invalid project' do
project = create(:project, namespace_id: 2)
scenario 'invalid project' do
project = create(:project, namespace_id: 2)
visit new_project_path
visit new_project_path
select2('2', from: '#project_namespace_id')
fill_in :project_path, with: project.name, visible: true
click_link 'GitLab export'
select2('2', from: '#project_namespace_id')
fill_in :project_path, with: project.name, visible: true
click_link 'GitLab export'
attach_file('file', file)
click_on 'Import project'
attach_file('file', file)
click_on 'Import project'
page.within('.flash-container') do
expect(page).to have_content('Project could not be imported')
page.within('.flash-container') do
expect(page).to have_content('Project could not be imported')
end
end
scenario 'project with no name' do
create(:project, namespace_id: 2)
visit new_project_path
select2('2', from: '#project_namespace_id')
# click on disabled element
find(:link, 'GitLab export').trigger('click')
page.within('.flash-container') do
expect(page).to have_content('Please enter path and name')
end
end
end
scenario 'project with no name' do
create(:project, namespace_id: 2)
context 'normal user' do
before do
login_as(normal_user)
end
visit new_project_path
scenario 'non-admin user is not allowed to import a project' do
expect(Project.all.count).to be_zero
select2('2', from: '#project_namespace_id')
visit new_project_path
# click on disabled element
find(:link, 'GitLab export').trigger('click')
fill_in :project_path, with: 'test-project-path', visible: true
page.within('.flash-container') do
expect(page).to have_content('Please enter path and name')
expect(page).not_to have_content('GitLab export')
end
end
......
......@@ -12,7 +12,8 @@ describe Gitlab::ImportExport::Reader, lib: true do
except: [:iid],
include: [:merge_request_diff, :merge_request_test]
} },
{ commit_statuses: { include: :commit } }]
{ commit_statuses: { include: :commit } },
{ project_members: { include: { user: { only: [:email] } } } }]
}
end
......
......@@ -7,6 +7,8 @@ project_tree:
- :merge_request_test
- commit_statuses:
- :commit
- project_members:
- :user
included_attributes:
project:
......@@ -14,6 +16,8 @@ included_attributes:
- :path
merge_requests:
- :id
user:
- :email
excluded_attributes:
merge_requests:
......