# For example:
#rclone mount c3g-data-repos:ihec_data /mnt/ihec_data_objstr --daemon --daemon-wait 0 --allow-other --read-only
</syntaxhighlight>Unmount with:

<code>fusermount -u /path/to/local/mount</code>

A service may be used to auto-mount the Object Store on boot with a service file (in /etc/systemd/system/).<syntaxhighlight lang="bash">
# Mount the ihec_data_objstr, even after a restart
[Unit]
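# The rest of this unit file is a hypothetical sketch, not the site's actual
# configuration — adapt the remote name, mount point, and rclone binary path
# to your setup before use:
Description=rclone mount of the ihec_data object store

[Service]
Type=notify
ExecStart=/usr/bin/rclone mount c3g-data-repos:ihec_data /mnt/ihec_data_objstr --allow-other --read-only
ExecStop=/bin/fusermount -u /mnt/ihec_data_objstr
Restart=on-failure

[Install]
WantedBy=multi-user.target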
</syntaxhighlight>
== No problems, only solutions ==

=== I cannot upload a file larger than 48GB. ===
In some situations, rclone cannot determine the size of the file to upload and falls back to the default <code>--s3-chunk-size 5M</code> to split the file into parts. Since the server limits a multipart upload to 10,000 parts, files larger than about 48 GB (5 MiB × 10,000) fail to upload.

You can work around this by setting a larger chunk size:
<syntaxhighlight lang="bash">
rclone copy --s3-chunk-size 50M my-large-file.cram my-project:test
</syntaxhighlight>
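To pick a chunk size, divide the file size by the 10,000-part limit and round up to a whole MiB. A minimal shell sketch (the 60 GiB figure is illustrative):

```shell
# Smallest --s3-chunk-size (in MiB) that keeps a file
# within the 10,000-part multipart limit.
file_size_bytes=$((60 * 1024 * 1024 * 1024))  # e.g. a 60 GiB CRAM file
min_chunk_mib=$(( file_size_bytes / 10000 / (1024 * 1024) + 1 ))
echo "use at least --s3-chunk-size ${min_chunk_mib}M"
# prints: use at least --s3-chunk-size 7M
```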
Another way is to lower the maximum number of parts in a multipart upload using [https://rclone.org/s3/#s3-max-upload-parts --s3-max-upload-parts], for example: <code>--s3-max-upload-parts 1000</code>.

Note that your computer needs enough RAM to buffer the chunks: larger chunk sizes increase memory usage during the upload.
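As a rough guide (assuming rclone buffers on the order of one chunk per concurrent part upload, with an upload concurrency of 4 — both figures are assumptions to adapt to your version and settings):

```shell
# Rough, assumed estimate of upload buffer memory:
# about chunk_size x upload_concurrency per transfer.
chunk_mib=50    # from --s3-chunk-size 50M
concurrency=4   # assumed number of parts uploaded in parallel
echo "approx $(( chunk_mib * concurrency )) MiB of buffer per transfer"
# prints: approx 200 MiB of buffer per transfer
```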