- 14 Oct, 2020 4 commits
-
-
Leo Le Bouter authored
The module is unsafe because it collects metadata while the files on the same system are still writable; it therefore cannot be relied upon for security purposes. It will, however, allow the agent to be used for better transparency of system changes where malicious changes are not expected.
-
Leo Le Bouter authored
-
Leo Le Bouter authored
-
Leo Le Bouter authored
-
- 30 Sep, 2020 2 commits
-
-
Leo Le Bouter authored
The UEFI standard asks that UEFI applications be located in an /EFI/<vendor> folder. We therefore place ours within /EFI/Nexedi inside the EFI System Partition.
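A minimal sketch of that placement, assuming the EFI System Partition is mounted at /boot/efi and using a hypothetical application file name:

```
import shutil
from pathlib import Path

ESP = Path("/boot/efi")              # assumed ESP mount point
VENDOR_DIR = ESP / "EFI" / "Nexedi"  # /EFI/<vendor> folder per the UEFI convention

def install_uefi_app(app_path):
    """Copy a UEFI application into the vendor folder on the ESP."""
    VENDOR_DIR.mkdir(parents=True, exist_ok=True)
    shutil.copy2(app_path, VENDOR_DIR / Path(app_path).name)

# install_uefi_app("metadata-collect.efi")  # hypothetical file name
```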
-
Leo Le Bouter authored
-
- 15 Sep, 2020 3 commits
-
-
Leo Le Bouter authored
Put our UEFI boot application just before the currently booted element in the list.
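A minimal sketch of that reordering logic, assuming boot entries are handled as a list of entry IDs (the values below are illustrative; the real code manipulates the UEFI BootOrder variable):

```
def insert_before_current(boot_order, current_entry, our_entry):
    """Return a new boot order with our_entry placed right before current_entry."""
    order = [e for e in boot_order if e != our_entry]  # drop any stale position of our entry
    idx = order.index(current_entry)                   # position of the currently booted entry
    return order[:idx] + [our_entry] + order[idx:]

# Example: BootCurrent is 0x0003, our application is entry 0x0005.
print(insert_before_current([0x0001, 0x0003, 0x0004, 0x0005], 0x0003, 0x0005))
# -> [1, 5, 3, 4]
```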
-
Leo Le Bouter authored
-
Leo Le Bouter authored
-
- 14 Sep, 2020 1 commit
-
-
Leo Le Bouter authored
The Python msgpack library does not deserialize MsgPack data created with Rust's rmp-serde well. Upload to the Computer Metadata Snapshot module recently created for ERP5.
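For reference, a small Python-side sketch, assuming the usual pitfall that rmp-serde encodes structs positionally (as arrays) by default, so field names are lost on the Python side unless a map encoding is used; the exact incompatibility is an assumption here:

```
import msgpack

# Positional encoding, as rmp-serde typically produces for a struct by default
# (assumption; shown here by packing a plain list).
positional = msgpack.packb(["/etc/hostname", 12], use_bin_type=True)
print(msgpack.unpackb(positional, raw=False))  # ['/etc/hostname', 12] -- no field names

# Map encoding keeps field names and round-trips cleanly through Python's msgpack.
named = msgpack.packb({"path": "/etc/hostname", "size": 12}, use_bin_type=True)
print(msgpack.unpackb(named, raw=False))       # {'path': '/etc/hostname', 'size': 12}
```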
-
- 08 Sep, 2020 10 commits
-
-
Leo Le Bouter authored
Currently, GNU Guix does not support private Git repositories, so the origin is only a placeholder until that works. The package otherwise does work when the origin is supplied from a local path or a public URL.
-
Leo Le Bouter authored
-
Leo Le Bouter authored
Also, only btrfs needs the mountpoint when the filesystem is mounted.
-
Leo Le Bouter authored
-
Leo Le Bouter authored
The .gitkeep file was getting into the Debian package, causing spurious errors during install.
-
Leo Le Bouter authored
-
Leo Le Bouter authored
-
Leo Le Bouter authored
Using the mountpoint instead of the device is required to change the label of a currently mounted filesystem.
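A hedged sketch of what that looks like for btrfs, using btrfs-progs' `btrfs filesystem label` subcommand (the mountpoint and label below are examples):

```
import subprocess

def set_btrfs_label(mountpoint, label):
    # For a mounted btrfs filesystem the mountpoint must be passed rather than
    # the block device; unmounted filesystems accept the device path instead.
    subprocess.run(["btrfs", "filesystem", "label", mountpoint, label], check=True)

# set_btrfs_label("/mnt/data", "metadata-collect")  # example values
```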
-
Leo Le Bouter authored
-
Leo Le Bouter authored
-
- 21 Aug, 2020 2 commits
-
-
Leo Le Bouter authored
To install the dracut module on your current system, change into the dracut.module directory, then run:

```
$ ERP5_USER="user" ERP5_PASS="pass" \
  ERP5_BASE_URL="https://example.local/erp5" \
  make
$ sudo make install
```

To uninstall:

```
$ sudo make uninstall
```

Then, to include it in a dracut.conf file, add:

```
add_dracutmodules="metadata-collect"
```

You will also need to append "ip=dhcp rd.neednet=1" to the kernel_cmdline directive inside dracut.conf so that the initramfs brings up networking for the agent to upload its results. Make sure the dracut network modules are installed; on Debian that is the dracut-network package. You can check their presence with:

```
$ dracut --list-modules | grep network
```

There you should see a few modules.
-
Leo Le Bouter authored
With rustls, it is easier to embed the root CA certificates inside the compiled binary itself using the webpki-roots crate. We need to do this because it is the easiest way of getting TLS certificate validation working inside the initramfs, where /etc/ssl/certs and the other usual certificate locations do not exist.
-
- 20 Aug, 2020 1 commit
-
-
Leo Le Bouter authored
In contradiction with Jean-Paul's guidelines on not using Rust due to the lack of knowledge about it inside Nexedi, I am using it here because it is the fastest way for me to get a working standalone static binary; I know that language best. Considering we must get results ASAP, this is the best strategy for me. We may later rewrite it in another language if necessary.

A shell script is included to build the static binary. You need to install rustup to get Rust for musl, an alternative libc that allows creating truly static binaries that embed libc itself. Rustup can be found at: https://rustup.rs/

You can get a musl toolchain with:

$ rustup target add x86_64-unknown-linux-musl

The acl library is downloaded and built as a static library by the script, and the Rust build system will also build a vendored copy of openssl as a static library.

Parallel hashing is done a bit differently in this Rust version: only the files contained in the currently processed directory are hashed in parallel. If there is a single big file in a directory, hashing will be stuck on that file until it is done before moving on to the next directory. To clarify, each file is only ever hashed on a single thread; the Python version also does this, but it keeps the number of files being hashed in parallel at a constant as long as there are more files to process, whereas this version only hashes the files of the currently processed directory in parallel, one thread per file. It was done that way for the sake of simplicity, but we can implement an offload threadpool later on to mimic what was done in Python.
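For comparison, a minimal sketch of the Python-style approach described above, where a constant-size pool keeps hashing files in parallel regardless of directory boundaries (pool size and root path are illustrative):

```
import hashlib
import os
from multiprocessing import Pool

def hash_file(path):
    # Each file is hashed on a single worker, as in both implementations.
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return path, h.hexdigest()

def hash_tree(root, workers=8):
    paths = [os.path.join(d, name)
             for d, _, files in os.walk(root)
             for name in files]
    # A constant number of files is hashed in parallel as long as there are
    # more files to process, independent of which directory they live in.
    with Pool(workers) as pool:
        return dict(pool.map(hash_file, paths))

if __name__ == "__main__":
    print(hash_tree("/tmp"))  # example root; unreadable files will raise
```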
-
- 19 Aug, 2020 1 commit
-
-
Leo Le Bouter authored
-
- 18 Aug, 2020 2 commits
-
-
Leo Le Bouter authored
-
Leo Le Bouter authored
TODO: Find a way to properly increment the version without having to store any additional state client-side.
TODO: Investigate using HATEOAS to talk to ERP5.
TODO: Investigate using TLS client certificates to authenticate; they would be stored in /boot and would prevent the machine from booting if they were invalid or missing, so that tampering with them is not interesting for an attacker. Also, the certificate's Common Name should be the computer reference and should therefore be used to construct the metadata snapshot document's reference instead of having to specify it on the command line.
-
- 14 Aug, 2020 2 commits
-
-
Leo Le Bouter authored
-
Leo Le Bouter authored
* Convert stat_result to a proper dictionary so that field names are retained after serialization
* Add the ability to ignore directories through command-line arguments; explicitly add an "ignored" field on ignored directories

It was decided that JSON was not a suitable format because its bytes serialization support is lacking. MsgPack supports it and is more efficient; it is also the internal serialization format of Fluentd, which we will most probably use for ingesting data in a central place.
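A minimal sketch of the stat_result conversion, assuming the field names are taken directly from the os.stat_result attributes:

```
import os

def stat_to_dict(path):
    # Convert os.stat_result into a plain dict so that field names such as
    # st_mode and st_mtime survive serialization instead of collapsing into
    # a positional tuple.
    st = os.lstat(path)
    return {name: getattr(st, name) for name in dir(st) if name.startswith("st_")}

print(stat_to_dict("/etc/hostname"))  # example path
```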
-
- 13 Aug, 2020 3 commits
-
-
Leo Le Bouter authored
multiprocessing.Pool.close() ensures no new tasks can be submitted to the pool and waits for them to all finish. Even though AsyncResult.get() also waits for the tasks to finish, and our code structure shouldn't submit new tasks at that point, close() first, then get(). This could be error-prone in the future if mp_tasks is modified while results are being merged back and we miss some results because the iterator won't take these new items into account *during* iteration.
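A small sketch of that ordering (the task function and task list are illustrative):

```
from multiprocessing import Pool

def work(n):
    return n * n  # placeholder task

if __name__ == "__main__":
    with Pool(4) as pool:
        mp_tasks = [pool.apply_async(work, (n,)) for n in range(10)]
        pool.close()                                 # no new tasks can be submitted from here on
        results = [task.get() for task in mp_tasks]  # then merge the results back
        pool.join()                                  # wait for the workers to exit
    print(results)
```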
-
Leo Le Bouter authored
-
Leo Le Bouter authored
In Python, the JSON encoder cannot process bytes; the JSON specification also does not define a "bytes" type. We are constrained by this in that we cannot serialize data of the bytes type. xattrs can be either strings or bytes; in practice they are likely representable as strings, so we decode them as UTF-8 and error otherwise. If real-world cases of xattrs in true binary format arise, we will work out another solution.
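A minimal sketch of that rule, assuming the xattrs are read with os.listxattr/os.getxattr:

```
import os

def xattrs_as_strings(path):
    # xattr values come back as bytes; decode them as UTF-8 and raise on
    # anything that is not valid UTF-8, per the policy described above.
    result = {}
    for name in os.listxattr(path, follow_symlinks=False):
        value = os.getxattr(path, name, follow_symlinks=False)
        result[name] = value.decode("utf-8")  # raises UnicodeDecodeError otherwise
    return result

print(xattrs_as_strings("/etc/hostname"))  # example path; often an empty dict
```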
-
- 12 Aug, 2020 1 commit
-
-
Leo Le Bouter authored
-