Commit 4b792b0d, authored Aug 16, 2016 by Ruben Davila
Merge branch 'master' into 8-11-stable

Parents: 759265c3, 7f853e22
Showing 12 changed files with 208 additions and 51 deletions (+208 / -51).
Changed files:

- CHANGELOG (+3 / -0)
- app/controllers/import/gitlab_projects_controller.rb (+5 / -0)
- app/views/projects/new.html.haml (+1 / -1)
- doc/development/README.md (+4 / -0)
- doc/development/adding_database_indexes.md (+123 / -0)
- doc/user/project/settings/import_export.md (+1 / -2)
- features/dashboard/new_project.feature (+1 / -1)
- features/steps/dashboard/new_project.rb (+1 / -2)
- lib/gitlab/import_export/json_hash_builder.rb (+3 / -6)
- spec/features/projects/import_export/import_file_spec.rb (+60 / -38)
- spec/lib/gitlab/import_export/reader_spec.rb (+2 / -1)
- spec/support/import_export/import_export.yml (+4 / -0)
CHANGELOG

@@ -117,6 +117,9 @@ v 8.11.0 (unreleased)
  - Speed up todos queries by limiting the projects set we join with
  - Ensure file editing in UI does not overwrite commited changes without warning user

v 8.10.6 (unreleased)
  - Fix import/export configuration missing some included attributes

v 8.10.5
  - Add a data migration to fix some missing timestamps in the members table. !5670
  - Revert the "Defend against 'Host' header injection" change in the source NGINX templates. !5706
app/controllers/import/gitlab_projects_controller.rb

class Import::GitlabProjectsController < Import::BaseController
  before_action :verify_gitlab_project_import_enabled
  before_action :authenticate_admin!

  def new
    @namespace_id = project_params[:namespace_id]

@@ -47,4 +48,8 @@ class Import::GitlabProjectsController < Import::BaseController
      :path, :namespace_id, :file
    )
  end

  def authenticate_admin!
    render_404 unless current_user.is_admin?
  end
end
app/views/projects/new.html.haml

@@ -77,7 +77,7 @@
     = link_to "#", class: 'btn js-toggle-button import_git' do
       = icon('git', text: 'Repo by URL')
   %div{ class: 'import_gitlab_project' }
-    - if gitlab_project_import_enabled?
+    - if gitlab_project_import_enabled? && current_user.is_admin?
       = link_to new_import_gitlab_project_path, class: 'btn btn_import_gitlab_project project-submit' do
         = icon('gitlab', text: 'GitLab export')
doc/development/README.md

@@ -30,7 +30,11 @@
- [Rake tasks](rake_tasks.md) for development
- [Shell commands](shell_commands.md) in the GitLab codebase
- [Sidekiq debugging](sidekiq_debugging.md)

## Databases

- [What requires downtime?](what_requires_downtime.md)
- [Adding database indexes](adding_database_indexes.md)

## Compliance
doc/development/adding_database_indexes.md (new file, mode 100644)
# Adding Database Indexes
Indexes can be used to speed up database queries, but when should you add a new
index? Traditionally the answer to this question has been to add an index for
every column used for filtering or joining data. For example, consider the
following query:
```sql
SELECT *
FROM projects
WHERE user_id = 2;
```
Here we are filtering by the `user_id` column and as such a developer may decide to index this column.
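For illustration only, the kind of index such a developer might reach for could look like this (the index name here is just an example, not necessarily one from GitLab's schema):

```sql
-- Illustrative only: a single-column btree index on the column filtered above.
CREATE INDEX index_projects_on_user_id ON projects (user_id);
```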
While indexing columns this way can make sense in certain cases, it can also have a negative impact. Whenever you write data to a table, any existing indexes need to be updated as well: the more indexes there are, the slower writes can become. Indexes can also take up significant disk space, depending on the amount of data indexed and the index type. For example, PostgreSQL offers "GIN" indexes, which can index certain data types that regular btree indexes cannot handle; these indexes, however, generally take up more space and are slower to update than btree indexes.

Because of this, you should not blindly add a new index for every column used to filter data. Instead, ask yourself the following questions:
1. Can I write my query in such a way that it re-uses as many existing indexes as possible?
2. Is the data going to be large enough that using an index will actually be faster than just iterating over the rows in the table?
3. Is the overhead of maintaining the index worth the reduction in query timings?
We'll explore every question in detail below.
## Re-using Queries
The first step is to make sure your query re-uses as many existing indexes as
possible. For example, consider the following query:
```sql
SELECT *
FROM todos
WHERE user_id = 123
AND state = 'open';
```
Now imagine we already have an index on the `user_id` column but not on the `state` column. One may think this query will perform badly due to `state` being unindexed. In reality the query may perform just fine given the index on `user_id` can filter out enough rows.
The best way to determine if indexes are re-used is to run your query using `EXPLAIN ANALYZE`. Depending on any extra tables that may be joined and other columns being used for filtering, you may find an extra index is not going to make much (if any) difference. On the other hand, you may determine that the index _may_ make a difference.
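As a quick sketch of what that looks like in practice (re-using the `todos` query from above), prefix the query with `EXPLAIN ANALYZE` in a `psql` session and check whether the plan uses an index scan or falls back to a sequential scan:

```sql
-- Note: EXPLAIN ANALYZE actually executes the query, so run heavy queries
-- against a copy of the database rather than production.
EXPLAIN ANALYZE
SELECT *
FROM todos
WHERE user_id = 123
AND state = 'open';

-- In the resulting plan, look for "Index Scan using <index name> on todos"
-- versus "Seq Scan on todos", and compare the reported execution times.
```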
In short:

1. Try to write your query in such a way that it re-uses as many existing indexes as possible.
2. Run the query using `EXPLAIN ANALYZE` and study the output to find the most ideal query.
## Data Size
A database may decide not to use an index, even when one exists, if a regular sequential scan (simply iterating over all existing rows) is faster. This is especially the case for small tables.

If a table is expected to grow in size, and you expect your query will have to filter out a lot of rows, you may want to consider adding an index. If the table size is very small (e.g. only a handful of rows), or any existing indexes already filter out enough rows, you may _not_ want to add a new index.
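To get a rough sense of whether a table is large enough for an index to pay off, a couple of standard PostgreSQL queries can help (the `todos` table is used purely as an example here):

```sql
-- Planner's row estimate for the table (cheap, but only as fresh as the last ANALYZE).
SELECT reltuples::bigint AS approximate_row_count
FROM pg_class
WHERE relname = 'todos';

-- Total on-disk size of the table, including its indexes.
SELECT pg_size_pretty(pg_total_relation_size('todos'));
```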
## Maintenance Overhead
Indexes have to be updated on every table write. In the case of PostgreSQL, _all_ existing indexes are updated whenever data is written to a table. As a result, having many indexes on the same table will slow down writes.

Because of this, ask yourself: is the reduction in query timings worth the overhead of maintaining an extra index?

If adding an index reduces SELECT timings by 5 milliseconds but increases INSERT/UPDATE/DELETE timings by 10 milliseconds, the index may not be worth it. On the other hand, if SELECT timings are reduced but INSERT/UPDATE/DELETE timings are not affected, you may want to add the index after all.
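If you do decide to experiment with an extra index on a large table, PostgreSQL's `CONCURRENTLY` option lets you build (and later drop) it without blocking writes while you measure the impact. This is a general PostgreSQL facility, not a step this document prescribes, and the index below is purely illustrative:

```sql
-- Build the index without holding a lock that blocks writes
-- (slower to build, and it cannot run inside a transaction block).
CREATE INDEX CONCURRENTLY index_todos_on_state ON todos (state);

-- If measurements show the write overhead outweighs the SELECT gains,
-- drop it again, also without blocking writes.
DROP INDEX CONCURRENTLY index_todos_on_state;
```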
## Finding Unused Indexes
To see which indexes are unused you can run the following query:
```sql
SELECT relname as table_name,
       indexrelname as index_name,
       idx_scan,
       idx_tup_read,
       idx_tup_fetch,
       pg_size_pretty(pg_relation_size(indexrelname::regclass))
FROM pg_stat_all_indexes
WHERE schemaname = 'public'
AND idx_scan = 0
AND idx_tup_read = 0
AND idx_tup_fetch = 0
ORDER BY pg_relation_size(indexrelname::regclass) desc;
```
This query outputs a list of all indexes that are never used, sorted by index size in descending order. It can be useful for determining whether previously added indexes are still worth keeping. More information on the meaning of the various columns can be found at <https://www.postgresql.org/docs/current/static/monitoring-stats.html>.
Because the output of this query relies on the actual usage of your database, it may be affected by factors such as (but not limited to):

* Certain queries never being executed, thus not being able to use certain indexes.
* Certain tables having little data, resulting in PostgreSQL using sequential scans instead of index scans.

In other words, this data is only reliable for a frequently used database with plenty of data, and with as many GitLab features enabled (and being used) as possible.
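One practical note: these counters accumulate from the last time statistics were reset, so if you want to measure index usage over a specific window you can reset them first (superuser only) and re-run the query above after some representative traffic:

```sql
-- Resets all statistics counters for the current database (requires superuser).
SELECT pg_stat_reset();
```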
doc/user/project/settings/import_export.md

@@ -7,8 +7,7 @@
 > than that of the exporter.
 > - For existing installations, the project import option has to be enabled in
 >   application settings (`/admin/application_settings`) under 'Import sources'.
->   Ask your administrator if you don't see the **GitLab export** button when
->   creating a new project.
+>   You will have to be an administrator to enable and use the import functionality.
 > - You can find some useful raketasks if you are an administrator in the
 >   [import_export](../../../administration/raketasks/project_import_export.md)
 >   raketask.
features/dashboard/new_project.feature

@@ -9,7 +9,7 @@ Background:
   @javascript
   Scenario: I should see New Projects page
     Then I see "New Project" page
-    Then I see all possible import optios
+    Then I see all possible import options

   @javascript
   Scenario: I should see instructions on how to import from Git URL
features/steps/dashboard/new_project.rb

@@ -14,14 +14,13 @@ class Spinach::Features::NewProject < Spinach::FeatureSteps
     expect(page).to have_content('Project name')
   end

-  step 'I see all possible import optios' do
+  step 'I see all possible import options' do
     expect(page).to have_link('GitHub')
     expect(page).to have_link('Bitbucket')
     expect(page).to have_link('GitLab.com')
     expect(page).to have_link('Gitorious.org')
     expect(page).to have_link('Google Code')
     expect(page).to have_link('Repo by URL')
     expect(page).to have_link('GitLab export')
   end

   step 'I click on "Import project from GitHub"' do
lib/gitlab/import_export/json_hash_builder.rb

@@ -57,19 +57,16 @@ module Gitlab
       # +value+ existing model to be included in the hash
       # +json_config_hash+ the original hash containing the root model
       def create_model_value(current_key, value, json_config_hash)
-        parsed_hash = { include: value }
-        parse_hash(value, parsed_hash)
-
-        json_config_hash[current_key] = parsed_hash
+        json_config_hash[current_key] = parse_hash(value) || { include: value }
       end

       # Calls attributes finder to parse the hash and add any attributes to it
       #
       # +value+ existing model to be included in the hash
       # +parsed_hash+ the original hash
-      def parse_hash(value, parsed_hash)
+      def parse_hash(value)
         @attributes_finder.parse(value) do |hash|
-          parsed_hash = { include: hash_or_merge(value, hash) }
+          { include: hash_or_merge(value, hash) }
         end
       end
spec/features/projects/import_export/import_file_spec.rb

@@ -3,8 +3,9 @@ require 'spec_helper'
 feature 'project import', feature: true, js: true do
   include Select2Helper

-  let(:user) { create(:admin) }
-  let!(:namespace) { create(:namespace, name: "asd", owner: user) }
+  let(:admin) { create(:admin) }
+  let(:normal_user) { create(:user) }
+  let!(:namespace) { create(:namespace, name: "asd", owner: admin) }
   let(:file) { File.join(Rails.root, 'spec', 'features', 'projects', 'import_export', 'test_project_export.tar.gz') }
   let(:export_path) { "#{Dir::tmpdir}/import_file_spec" }
   let(:project) { Project.last }

@@ -12,13 +13,17 @@ feature 'project import', feature: true, js: true do
   background do
     allow_any_instance_of(Gitlab::ImportExport).to receive(:storage_path).and_return(export_path)
-    login_as(user)
   end

   after(:each) do
     FileUtils.rm_rf(export_path, secure: true)
   end

+  context 'admin user' do
+    before do
+      login_as(admin)
+    end
+
   scenario 'user imports an exported project successfully' do
     expect(Project.all.count).to be_zero

@@ -74,6 +79,23 @@ feature 'project import', feature: true, js: true do
       expect(page).to have_content('Please enter path and name')
     end
   end
+  end
+
+  context 'normal user' do
+    before do
+      login_as(normal_user)
+    end
+
+    scenario 'non-admin user is not allowed to import a project' do
+      expect(Project.all.count).to be_zero
+
+      visit new_project_path
+
+      fill_in :project_path, with: 'test-project-path', visible: true
+      expect(page).not_to have_content('GitLab export')
+    end
+  end

   def wiki_exists?
     wiki = ProjectWiki.new(project)
spec/lib/gitlab/import_export/reader_spec.rb

@@ -12,7 +12,8 @@ describe Gitlab::ImportExport::Reader, lib: true do
             except: [:iid],
             include: [:merge_request_diff, :merge_request_test]
           } },
-          { commit_statuses: { include: :commit } }]
+          { commit_statuses: { include: :commit } },
+          { project_members: { include: { user: { only: [:email] } } } }]
       }
     end
spec/support/import_export/import_export.yml

@@ -7,6 +7,8 @@ project_tree:
  - :merge_request_test
- commit_statuses:
  - :commit
- project_members:
  - :user

included_attributes:
  project:

@@ -14,6 +16,8 @@ included_attributes:
    - :path
  merge_requests:
    - :id
  user:
    - :email

excluded_attributes:
  merge_requests: