Commit e80853a2, authored Feb 19, 2014 by Marco Mariani
hadoop, moved to stack
Parent: b9f5d141

Showing 17 changed files with 297 additions and 177 deletions (+297, -177)
software/hadoop-demo/gutenberg/data-download.sh   +17   -0
software/hadoop-demo/gutenberg/mapper.py           +0   -0
software/hadoop-demo/gutenberg/put-files.sh.in    +17   -0
software/hadoop-demo/gutenberg/reducer.py          +0   -0
software/hadoop-demo/gutenberg/run.sh.in          +11   -0
software/hadoop-demo/instance.cfg.in             +107   -0
software/hadoop-demo/software.cfg                 +24   -0
software/hadoop-demo/wikipedia/data-download.sh   +42   -0
software/hadoop-demo/wikipedia/mapper.py           +1  -16
software/hadoop-demo/wikipedia/put-files.sh.in    +17   -0
software/hadoop-demo/wikipedia/reducer.py          +1   -0
software/hadoop-demo/wikipedia/run.sh.in           +9   -0
software/hadoop/software.cfg                       +0 -111
software/hadoop/template/bin/run-demo.sh.in        +0   -5
stack/hadoop.cfg                                   +0   -8
stack/hadoop/buildout.cfg                         +50   -0
stack/hadoop/instance-stack.cfg.in                 +1  -37
software/hadoop-demo/gutenberg/data-download.sh (new file, mode 100644)
#!/bin/bash

. environment.sh

DIR=var/gutenberg/raw-data
mkdir -p $DIR

wget -P $DIR -c http://www.gutenberg.org/cache/epub/103/pg103.txt
wget -P $DIR -c http://www.gutenberg.org/cache/epub/18857/pg18857.txt
wget -P $DIR -c http://www.gutenberg.org/cache/epub/2488/pg2488.txt
wget -P $DIR -c http://www.gutenberg.org/cache/epub/164/pg164.txt
wget -P $DIR -c http://www.gutenberg.org/cache/epub/1268/pg1268.txt
wget -P $DIR -c http://www.gutenberg.org/cache/epub/800/pg800.txt
wget -P $DIR -c http://www.gutenberg.org/cache/epub/4791/pg4791.txt
wget -P $DIR -c http://www.gutenberg.org/cache/epub/3526/pg3526.txt
wget -P $DIR -c http://www.gutenberg.org/cache/epub/2083/pg2083.txt
software/hadoop/template/bin/gutenberg-mapper.py.in → software/hadoop-demo/gutenberg/mapper.py (file moved)
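The mapper's contents are not shown here (the file is moved unchanged). For orientation, a Hadoop Streaming word-count mapper in the style of the Michael Noll tutorial referenced by run.sh.in looks roughly like this sketch (an illustration, not the moved file's actual contents):

#!/usr/bin/env python
# Illustrative Streaming word-count mapper -- a sketch, not the moved file.
# Hadoop Streaming feeds input lines on stdin and expects
# key<TAB>value pairs on stdout.
import sys

for line in sys.stdin:
    for word in line.strip().split():
        print('%s\t%s' % (word, 1))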
software/hadoop/template/bin/put-files.sh.in → software/hadoop-demo/gutenberg/put-files.sh.in
@@ -5,12 +5,12 @@
 source environment.sh

-hdfs dfs -mkdir gutenberg
+hdfs dfs -mkdir var/gutenberg/input

-RAW_DATA=${buildout:directory}/software_release/gutenberg
+RAW_DATA=var/gutenberg/raw-data

 for file in `ls $RAW_DATA`; do
-  hdfs dfs -put $RAW_DATA/$file gutenberg/
+  hdfs dfs -put $RAW_DATA/$file var/gutenberg/input
 done
software/hadoop/template/bin/gutenberg-reducer.py.in → software/hadoop-demo/gutenberg/reducer.py (file moved)
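As with the mapper, the reducer's body is not part of this diff. A matching word-count reducer conventionally looks like the following sketch (illustrative only; Streaming delivers the mapper output sorted by key, so equal words arrive on consecutive lines and a single running counter suffices):

#!/usr/bin/env python
# Illustrative Streaming word-count reducer -- a sketch, not the moved file.
import sys

current_word, current_count = None, 0
for line in sys.stdin:
    word, count = line.strip().split('\t', 1)
    if word == current_word:
        current_count += int(count)
    else:
        if current_word is not None:
            print('%s\t%d' % (current_word, current_count))
        current_word, current_count = word, int(count)
if current_word is not None:
    print('%s\t%d' % (current_word, current_count))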
software/hadoop-demo/gutenberg/run.sh.in (new file, mode 100644)
#!/bin/bash
# http://www.michael-noll.com/tutorials/writing-an-hadoop-mapreduce-program-in-python/

. environment.sh

hadoop jar software_release/parts/hadoop-streaming/*jar \
    -mapper demo/gutenberg/mapper.py \
    -reducer demo/gutenberg/reducer.py \
    -input var/gutenberg/input/* \
    -output var/gutenberg/output
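Once the Hadoop daemons are up, the Gutenberg demo would typically be driven in this order (an illustrative invocation; the exact working directory depends on the instance layout):

. environment.sh
demo/gutenberg/data-download.sh            # fetch the books into var/gutenberg/raw-data
demo/gutenberg/put-files.sh                # load them into HDFS under var/gutenberg/input
demo/gutenberg/run.sh                      # launch the streaming job
hdfs dfs -cat var/gutenberg/output/part-*  # inspect the word counts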
software/hadoop-demo/instance.cfg.in (new file, mode 100644)
[buildout]
extends =
    ${instance-stack:output}

parts =
    sh-environment
    start-daemons
    deploy-tar
    gutenberg-data-download
    gutenberg-mapper
    gutenberg-reducer
    gutenberg-run
    gutenberg-put-files
    wikipedia-data-download
    wikipedia-mapper
    wikipedia-reducer
    wikipedia-run
    wikipedia-put-files

[directories]
demo = $${buildout:directory}/demo
gutenberg = $${:demo}/gutenberg
wikipedia = $${:demo}/wikipedia

[gutenberg-data-download]
recipe = slapos.recipe.template
url = ${:_profile_base_location_}/gutenberg/data-download.sh
#md5sum =
output = $${directories:gutenberg}/data-download.sh
mode = 0755

[gutenberg-mapper]
recipe = slapos.recipe.template
url = ${:_profile_base_location_}/gutenberg/mapper.py
#md5sum =
output = $${directories:gutenberg}/mapper.py
mode = 0755

[gutenberg-reducer]
recipe = slapos.recipe.template
url = ${:_profile_base_location_}/gutenberg/reducer.py
#md5sum =
output = $${directories:gutenberg}/reducer.py
mode = 0755

[gutenberg-run]
recipe = slapos.recipe.template
url = ${:_profile_base_location_}/gutenberg/run.sh.in
#md5sum =
output = $${directories:gutenberg}/run.sh
mode = 0755

[gutenberg-put-files]
recipe = slapos.recipe.template
url = ${:_profile_base_location_}/gutenberg/put-files.sh.in
#md5sum =
output = $${directories:gutenberg}/put-files.sh
mode = 0755

[wikipedia-data-download]
recipe = slapos.recipe.template
url = ${:_profile_base_location_}/wikipedia/data-download.sh
#md5sum =
output = $${directories:wikipedia}/data-download.sh
mode = 0755

[wikipedia-mapper]
recipe = slapos.recipe.template
url = ${:_profile_base_location_}/wikipedia/mapper.py
#md5sum =
output = $${directories:wikipedia}/mapper.py
mode = 0755

[wikipedia-reducer]
recipe = slapos.recipe.template
url = ${:_profile_base_location_}/wikipedia/reducer.py
#md5sum =
output = $${directories:wikipedia}/reducer.py
mode = 0755

[wikipedia-run]
recipe = slapos.recipe.template
url = ${:_profile_base_location_}/wikipedia/run.sh.in
#md5sum =
output = $${directories:wikipedia}/run.sh
mode = 0755

[wikipedia-put-files]
recipe = slapos.recipe.template
url = ${:_profile_base_location_}/wikipedia/put-files.sh.in
#md5sum =
output = $${directories:wikipedia}/put-files.sh
mode = 0755
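Note the two substitution levels above: ${...} options are expanded by slapos.recipe.template when the software release renders this profile, while $${...} is buildout's escape for a literal ${...}, so those references survive into the generated instance.cfg and are resolved only at instance time. For example:

# expanded while building the software release:
url = ${:_profile_base_location_}/gutenberg/run.sh.in
# emitted as ${directories:gutenberg}/run.sh and resolved
# only when the instance buildout runs:
output = $${directories:gutenberg}/run.sh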
software/hadoop-demo/software.cfg (new file, mode 100644)
[buildout]
extends =
    ../../stack/hadoop/buildout.cfg

parts =
    slapos-cookbook
    eggs
    java
    hadoop
    hadoop-streaming
    instance-stack
    instance

[instance]
recipe = slapos.recipe.template
url = ${:_profile_base_location_}/instance.cfg.in
output = ${buildout:directory}/instance.cfg
# md5sum =
mode = 0644
software/hadoop-demo/wikipedia/data-download.sh (new file, mode 100644)
#!/bin/bash

. environment.sh

DIR=var/wikipedia/raw-data
mkdir -p $DIR

# http://dumps.wikimedia.org/enwiki/20140203/
# All pages, current versions only.

wget -P $DIR -c http://dumps.wikimedia.org/enwiki/20140203/enwiki-20140203-pages-meta-current1.xml-p000000010p000010000.bz2
wget -P $DIR -c http://dumps.wikimedia.org/enwiki/20140203/enwiki-20140203-pages-meta-current2.xml-p000010001p000025000.bz2
wget -P $DIR -c http://dumps.wikimedia.org/enwiki/20140203/enwiki-20140203-pages-meta-current3.xml-p000025001p000055000.bz2
wget -P $DIR -c http://dumps.wikimedia.org/enwiki/20140203/enwiki-20140203-pages-meta-current4.xml-p000055002p000104998.bz2
wget -P $DIR -c http://dumps.wikimedia.org/enwiki/20140203/enwiki-20140203-pages-meta-current5.xml-p000105001p000184999.bz2
wget -P $DIR -c http://dumps.wikimedia.org/enwiki/20140203/enwiki-20140203-pages-meta-current6.xml-p000185003p000305000.bz2
wget -P $DIR -c http://dumps.wikimedia.org/enwiki/20140203/enwiki-20140203-pages-meta-current7.xml-p000305002p000464997.bz2
wget -P $DIR -c http://dumps.wikimedia.org/enwiki/20140203/enwiki-20140203-pages-meta-current8.xml-p000465001p000665000.bz2
wget -P $DIR -c http://dumps.wikimedia.org/enwiki/20140203/enwiki-20140203-pages-meta-current9.xml-p000665001p000925000.bz2
# don't download the full dataset
# wget -P $DIR -c http://dumps.wikimedia.org/enwiki/20140203/enwiki-20140203-pages-meta-current10.xml-p000925001p001325000.bz2
# wget -P $DIR -c http://dumps.wikimedia.org/enwiki/20140203/enwiki-20140203-pages-meta-current11.xml-p001325001p001825000.bz2
# wget -P $DIR -c http://dumps.wikimedia.org/enwiki/20140203/enwiki-20140203-pages-meta-current12.xml-p001825001p002425000.bz2
# wget -P $DIR -c http://dumps.wikimedia.org/enwiki/20140203/enwiki-20140203-pages-meta-current13.xml-p002425001p003124998.bz2
# wget -P $DIR -c http://dumps.wikimedia.org/enwiki/20140203/enwiki-20140203-pages-meta-current14.xml-p003125001p003924999.bz2
# wget -P $DIR -c http://dumps.wikimedia.org/enwiki/20140203/enwiki-20140203-pages-meta-current15.xml-p003925001p004825000.bz2
# wget -P $DIR -c http://dumps.wikimedia.org/enwiki/20140203/enwiki-20140203-pages-meta-current16.xml-p004825002p006025000.bz2
# wget -P $DIR -c http://dumps.wikimedia.org/enwiki/20140203/enwiki-20140203-pages-meta-current17.xml-p006025001p007524997.bz2
# wget -P $DIR -c http://dumps.wikimedia.org/enwiki/20140203/enwiki-20140203-pages-meta-current18.xml-p007525002p009225000.bz2
# wget -P $DIR -c http://dumps.wikimedia.org/enwiki/20140203/enwiki-20140203-pages-meta-current19.xml-p009225001p011125000.bz2
# wget -P $DIR -c http://dumps.wikimedia.org/enwiki/20140203/enwiki-20140203-pages-meta-current20.xml-p011125001p013324998.bz2
# wget -P $DIR -c http://dumps.wikimedia.org/enwiki/20140203/enwiki-20140203-pages-meta-current21.xml-p013325001p015725000.bz2
# wget -P $DIR -c http://dumps.wikimedia.org/enwiki/20140203/enwiki-20140203-pages-meta-current22.xml-p015725003p018225000.bz2
# wget -P $DIR -c http://dumps.wikimedia.org/enwiki/20140203/enwiki-20140203-pages-meta-current23.xml-p018225001p020925000.bz2
# wget -P $DIR -c http://dumps.wikimedia.org/enwiki/20140203/enwiki-20140203-pages-meta-current24.xml-p020925002p023725000.bz2
# wget -P $DIR -c http://dumps.wikimedia.org/enwiki/20140203/enwiki-20140203-pages-meta-current25.xml-p023725001p026624999.bz2
# wget -P $DIR -c http://dumps.wikimedia.org/enwiki/20140203/enwiki-20140203-pages-meta-current26.xml-p026625002p029625000.bz2
# wget -P $DIR -c http://dumps.wikimedia.org/enwiki/20140203/enwiki-20140203-pages-meta-current27.xml-p029625001p041836446.bz2
software/hadoop/wikipedia-mapper.py.in → software/hadoop-demo/wikipedia/mapper.py
#!/usr/bin/env python
import bz2
import os

...

@@ -39,23 +40,7 @@ def process_xml(input):
    parser.parse(input)

if __name__ == '__main__':
    input = bz2.BZ2File('/dev/fd/0')
    process_xml(input)
# dirname = '/srv/slapgrid/slappart20/srv/runner/instance/slappart0/software_release/raw-data/'
# filenames = os.listdir(dirname)
# # ['enwiki-20140203-pages-meta-current1.xml-p000000010p000010000.bz2']
# for fname in filenames:
# process_xml(os.path.join(dirname, fname))
# input = bz2.BZ2File(process_xml(os.path.join(dirname, fname)))
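Because the mapper now simply decompresses whatever arrives on stdin (bz2.BZ2File('/dev/fd/0') wraps the stdin file descriptor), it can be smoke-tested outside Hadoop by piping a downloaded dump through it (an illustrative invocation, assuming one dump file is present):

cat var/wikipedia/raw-data/enwiki-20140203-pages-meta-current1.xml-*.bz2 \
    | demo/wikipedia/mapper.py | head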
software/hadoop-demo/wikipedia/put-files.sh.in (new file, mode 100644)
#!/bin/bash

# exit on error
# set -e

source environment.sh

hdfs dfs -mkdir var/wikipedia/input

RAW_DATA=var/wikipedia/raw-data

for file in `ls $RAW_DATA`; do
  hdfs dfs -put $RAW_DATA/$file var/wikipedia/input
done
software/hadoop/wikipedia-reducer.py.in → software/hadoop-demo/wikipedia/reducer.py
#!/usr/bin/env python
import bz2
import os

...
software/hadoop-demo/wikipedia/run.sh.in (new file, mode 100644)
#!/bin/bash

. environment.sh

hadoop jar software_release/parts/hadoop-streaming/*jar \
    -mapper demo/wikipedia/mapper.py \
    -reducer demo/wikipedia/reducer.py \
    -input var/wikipedia/input/* \
    -output var/wikipedia/output
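As with the Gutenberg job, the results land in HDFS; a typical way to peek at the highest counts afterwards (illustrative):

. environment.sh
hdfs dfs -cat var/wikipedia/output/part-* | sort -t$'\t' -k2,2rn | head -20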
software/hadoop/software.cfg (deleted, was mode 100644)
[buildout]
extends =
    ../../stack/slapos.cfg
    ../../component/java/buildout.cfg

parts =
    slapos-cookbook
    eggs
    java
    hadoop
    hadoop-streaming
    gutenberg-dataset
    instance

[eggs]
recipe = zc.recipe.egg
eggs =
    slapos.cookbook
    collective.recipe.template
    cp.recipe.cmd
    plone.recipe.command

[hadoop]
recipe = hexagonit.recipe.download
filename = hadoop-2.2.0.tar.gz
url = http://apache.mirrors.spacedump.net/hadoop/common/stable/${:filename}
md5sum = 25f27eb0b5617e47c032319c0bfd9962
download-only = true
mode = 0644
strip-top-level-dir = true

[hadoop-streaming]
recipe = hexagonit.recipe.download
url = http://repo1.maven.org/maven2/org/apache/hadoop/hadoop-streaming/0.20.203.0/hadoop-streaming-0.20.203.0.jar
download-only = true
#md5sum =
mode = 0644

[instance]
recipe = slapos.recipe.template
url = ${:_profile_base_location_}/instance.cfg.in
output = ${buildout:directory}/instance.cfg
# md5sum =
mode = 0644

[gutenberg-dataset]
recipe = cp.recipe.cmd
update_cmd = /bin/true
install_cmd =
    mkdir -p ${buildout:directory}/gutenberg
    cd ${buildout:directory}/gutenberg
    wget -c http://www.gutenberg.org/cache/epub/103/pg103.txt
    wget -c http://www.gutenberg.org/cache/epub/18857/pg18857.txt
    wget -c http://www.gutenberg.org/cache/epub/2488/pg2488.txt
    wget -c http://www.gutenberg.org/cache/epub/164/pg164.txt
    wget -c http://www.gutenberg.org/cache/epub/1268/pg1268.txt
    wget -c http://www.gutenberg.org/cache/epub/800/pg800.txt
    wget -c http://www.gutenberg.org/cache/epub/4791/pg4791.txt
    wget -c http://www.gutenberg.org/cache/epub/3526/pg3526.txt
    wget -c http://www.gutenberg.org/cache/epub/2083/pg2083.txt
#[wikipedia-dataset]
#recipe = cp.recipe.cmd
#update_cmd = /bin/true
##update_cmd = ${:install_cmd}
#install_cmd =
# mkdir -p ${buildout:directory}/raw-data
# cd ${buildout:directory}/raw-data
# wget -c http://dumps.wikimedia.org/enwiki/20140203/enwiki-20140203-pages-meta-current1.xml-p000000010p000010000.bz2
# wget -c http://dumps.wikimedia.org/enwiki/20140203/enwiki-20140203-pages-meta-current2.xml-p000010001p000025000.bz2
# wget -c http://dumps.wikimedia.org/enwiki/20140203/enwiki-20140203-pages-meta-current3.xml-p000025001p000055000.bz2
# wget -c http://dumps.wikimedia.org/enwiki/20140203/enwiki-20140203-pages-meta-current4.xml-p000055002p000104998.bz2
# wget -c http://dumps.wikimedia.org/enwiki/20140203/enwiki-20140203-pages-meta-current5.xml-p000105001p000184999.bz2
# wget -c http://dumps.wikimedia.org/enwiki/20140203/enwiki-20140203-pages-meta-current6.xml-p000185003p000305000.bz2
# wget -c http://dumps.wikimedia.org/enwiki/20140203/enwiki-20140203-pages-meta-current7.xml-p000305002p000464997.bz2
# wget -c http://dumps.wikimedia.org/enwiki/20140203/enwiki-20140203-pages-meta-current8.xml-p000465001p000665000.bz2
# wget -c http://dumps.wikimedia.org/enwiki/20140203/enwiki-20140203-pages-meta-current9.xml-p000665001p000925000.bz2
# wget -c http://dumps.wikimedia.org/enwiki/20140203/enwiki-20140203-pages-meta-current10.xml-p000925001p001325000.bz2
# wget -c http://dumps.wikimedia.org/enwiki/20140203/enwiki-20140203-pages-meta-current11.xml-p001325001p001825000.bz2
# wget -c http://dumps.wikimedia.org/enwiki/20140203/enwiki-20140203-pages-meta-current12.xml-p001825001p002425000.bz2
# wget -c http://dumps.wikimedia.org/enwiki/20140203/enwiki-20140203-pages-meta-current13.xml-p002425001p003124998.bz2
# wget -c http://dumps.wikimedia.org/enwiki/20140203/enwiki-20140203-pages-meta-current14.xml-p003125001p003924999.bz2
# wget -c http://dumps.wikimedia.org/enwiki/20140203/enwiki-20140203-pages-meta-current15.xml-p003925001p004825000.bz2
# wget -c http://dumps.wikimedia.org/enwiki/20140203/enwiki-20140203-pages-meta-current16.xml-p004825002p006025000.bz2
# wget -c http://dumps.wikimedia.org/enwiki/20140203/enwiki-20140203-pages-meta-current17.xml-p006025001p007524997.bz2
# wget -c http://dumps.wikimedia.org/enwiki/20140203/enwiki-20140203-pages-meta-current18.xml-p007525002p009225000.bz2
# wget -c http://dumps.wikimedia.org/enwiki/20140203/enwiki-20140203-pages-meta-current19.xml-p009225001p011125000.bz2
# wget -c http://dumps.wikimedia.org/enwiki/20140203/enwiki-20140203-pages-meta-current20.xml-p011125001p013324998.bz2
# wget -c http://dumps.wikimedia.org/enwiki/20140203/enwiki-20140203-pages-meta-current21.xml-p013325001p015725000.bz2
# wget -c http://dumps.wikimedia.org/enwiki/20140203/enwiki-20140203-pages-meta-current22.xml-p015725003p018225000.bz2
# wget -c http://dumps.wikimedia.org/enwiki/20140203/enwiki-20140203-pages-meta-current23.xml-p018225001p020925000.bz2
# wget -c http://dumps.wikimedia.org/enwiki/20140203/enwiki-20140203-pages-meta-current24.xml-p020925002p023725000.bz2
# wget -c http://dumps.wikimedia.org/enwiki/20140203/enwiki-20140203-pages-meta-current25.xml-p023725001p026624999.bz2
# wget -c http://dumps.wikimedia.org/enwiki/20140203/enwiki-20140203-pages-meta-current26.xml-p026625002p029625000.bz2
# wget -c http://dumps.wikimedia.org/enwiki/20140203/enwiki-20140203-pages-meta-current27.xml-p029625001p041836446.bz2
software/hadoop/template/bin/run-demo.sh.in (deleted, was mode 100644)
#!/bin/bash

. environment.sh

hadoop jar software_release/parts/hadoop-streaming/*jar -mapper gutenberg-mapper.py -reducer gutenberg-reducer.py -input gutenberg/* -output gutenberg-output
stack/hadoop.cfg (deleted, was mode 100644)
[buildout]
extends =
    ../component/java/buildout.cfg
    ../stack/slapos.cfg

parts =
    java
stack/hadoop/buildout.cfg (new file, mode 100644)
[buildout]
extends =
    ../../stack/slapos.cfg
    ../../component/java/buildout.cfg

parts =
    slapos-cookbook
    eggs
    java
    hadoop
    hadoop-streaming
    instance-stack

[eggs]
recipe = zc.recipe.egg
eggs =
    slapos.cookbook
    collective.recipe.template
    cp.recipe.cmd
    plone.recipe.command

[hadoop]
recipe = hexagonit.recipe.download
filename = hadoop-2.2.0.tar.gz
url = http://apache.mirrors.spacedump.net/hadoop/common/stable/${:filename}
md5sum = 25f27eb0b5617e47c032319c0bfd9962
download-only = true
mode = 0644
strip-top-level-dir = true

[hadoop-streaming]
recipe = hexagonit.recipe.download
url = http://repo1.maven.org/maven2/org/apache/hadoop/hadoop-streaming/0.20.203.0/hadoop-streaming-0.20.203.0.jar
download-only = true
#md5sum =
mode = 0644

[instance-stack]
recipe = slapos.recipe.template
url = ${:_profile_base_location_}/instance-stack.cfg.in
output = ${buildout:directory}/instance-stack.cfg
# md5sum =
mode = 0644
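The empty #md5sum = in [hadoop-streaming] (and # md5sum = in [instance-stack]) means buildout skips checksum verification for those downloads. Once the artifacts are trusted, the digest would typically be computed and pinned in the profile (illustrative):

wget -q http://repo1.maven.org/maven2/org/apache/hadoop/hadoop-streaming/0.20.203.0/hadoop-streaming-0.20.203.0.jar
md5sum hadoop-streaming-0.20.203.0.jar   # paste the digest into md5sum =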
software/hadoop/instance.cfg.in → stack/hadoop/instance-stack.cfg.in
@@ -2,10 +2,6 @@
 parts =
     sh-environment
-    put-files
-    mapper
-    reducer
-    run-demo
     start-daemons
     deploy-tar
...

@@ -36,48 +32,16 @@ command =
     [ -d $${directories:hadoop-prefix}/bin} ] || tar xf ${hadoop:location}/${hadoop:filename} -C $${directories:hadoop-prefix} --strip-components=1

 [directories]
 recipe = slapos.cookbook:mkdirectory
 bin = $${buildout:directory}/bin
 etc = $${buildout:directory}/etc
 var = $${buildout:directory}/var
 hadoop-prefix = $${buildout:directory}/hadoop
 services = $${directories:etc}/service
 promises = $${directories:etc}/promise

-[put-files]
-recipe = slapos.recipe.template
-url = ${:_profile_base_location_}/template/bin/put-files.sh.in
-output = $${directories:bin}/put-files.sh
-# md5sum =
-mode = 0755
-
-# http://www.michael-noll.com/tutorials/writing-an-hadoop-mapreduce-program-in-python/
-[mapper]
-recipe = slapos.recipe.template
-url = ${:_profile_base_location_}/template/bin/gutenberg-mapper.py.in
-output = $${directories:bin}/gutenberg-mapper.py
-# md5sum =
-mode = 0755
-
-[reducer]
-recipe = slapos.recipe.template
-url = ${:_profile_base_location_}/template/bin/gutenberg-reducer.py.in
-output = $${directories:bin}/gutenberg-reducer.py
-# md5sum =
-mode = 0755
-
-[run-demo]
-recipe = slapos.recipe.template
-url = ${:_profile_base_location_}/template/bin/run-demo.sh.in
-output = $${directories:bin}/run-demo.sh
-# md5sum =
-mode = 0755

 [start-daemons]
 recipe = slapos.recipe.template
 url = ${:_profile_base_location_}/template/bin/start-daemons.sh.in
...