mirror of https://github.com/tahoe-lafs/tahoe-lafs.git
synced 2025-04-08 19:34:18 +00:00

Merge branch 'master' into ticket3252-port-web-directory.remaining.1

commit c385e958a8
@@ -1,4 +1,4 @@
-FROM pypy:2.7-7.1.1-jessie
+FROM pypy:2.7-buster

 ENV WHEELHOUSE_PATH /tmp/wheelhouse
 ENV VIRTUALENV_PATH /tmp/venv
@@ -27,8 +27,8 @@ workflows:
 - "nixos-19.09"

-# Test against PyPy 2.7/7.1.1
-- "pypy2.7-7.1"
+# Test against PyPy 2.7
+- "pypy2.7-buster"

 # Other assorted tasks and configurations
 - "lint"
@@ -69,7 +69,7 @@ workflows:
 - "build-image-fedora-29"
 - "build-image-centos-8"
 - "build-image-slackware-14.2"
-- "build-image-pypy-2.7-7.1.1-jessie"
+- "build-image-pypy-2.7-buster"

 jobs:
@@ -198,10 +198,10 @@ jobs:
 user: "nobody"

-pypy2.7-7.1:
+pypy2.7-buster:
 <<: *DEBIAN
 docker:
-- image: "tahoelafsci/pypy:2.7-7.1.1-jessie"
+- image: "tahoelafsci/pypy:2.7-buster"
 user: "nobody"

 environment:
@@ -513,9 +513,9 @@ jobs:
 TAG: "14.2"

-build-image-pypy-2.7-7.1.1-jessie:
+build-image-pypy-2.7-buster:
 <<: *BUILD_IMAGE

 environment:
 DISTRO: "pypy"
-TAG: "2.7-7.1.1-jessie"
+TAG: "2.7-buster"
.github/workflows/ci.yml (vendored): 2 changes
@@ -2,6 +2,8 @@ name: CI

 on:
   push:
+    branches:
+      - "master"
   pull_request:

 jobs:
.gitignore (vendored): 1 change
@@ -43,7 +43,6 @@ zope.interface-*.egg
 /.tox/
 /docs/_build/
 /coverage.xml
-/smoke_magicfolder/
 /.hypothesis/

 # This is the plaintext of the private environment needed for some CircleCI
Makefile: 6 changes
@@ -42,12 +42,6 @@ upload-osx-pkg:
 #	echo not uploading tahoe-lafs-osx-pkg because this is not trunk but is branch \"${BB_BRANCH}\" ; \
 #	fi

-.PHONY: smoketest
-smoketest:
-	-python ./src/allmydata/test/check_magicfolder_smoke.py kill
-	-rm -rf smoke_magicfolder/
-	python ./src/allmydata/test/check_magicfolder_smoke.py
-
 # code coverage-based testing is disabled temporarily, as we switch to tox.
 # This will eventually be added to a tox environment. The following comments
 # and variable settings are retained as notes for that future effort.
@@ -82,7 +82,6 @@ Client/server nodes provide one or more of the following services:
 * web-API service
 * SFTP service
 * FTP service
-* Magic Folder service
 * helper service
 * storage service.
@@ -719,12 +718,6 @@ SFTP, FTP
 for instructions on configuring these services, and the ``[sftpd]`` and
 ``[ftpd]`` sections of ``tahoe.cfg``.

-Magic Folder
-
-A node running on Linux or Windows can be configured to automatically
-upload files that are created or changed in a specified local directory.
-See :doc:`frontends/magic-folder` for details.
-

 Storage Server Configuration
 ============================
@@ -1,148 +0,0 @@

.. -*- coding: utf-8-with-signature -*-

================================
Tahoe-LAFS Magic Folder Frontend
================================

1. `Introduction`_
2. `Configuration`_
3. `Known Issues and Limitations With Magic-Folder`_


Introduction
============

The Magic Folder frontend synchronizes local directories on two or more
clients, using a Tahoe-LAFS grid for storage. Whenever a file is created
or changed under the local directory of one of the clients, the change is
propagated to the grid and then to the other clients.

The implementation of the "drop-upload" frontend, on which Magic Folder is
based, was written as a prototype at the First International Tahoe-LAFS
Summit in June 2011. In 2015, with the support of a grant from the
`Open Technology Fund`_, it was redesigned and extended to support
synchronization between clients. It currently works on Linux and Windows.

Magic Folder is not currently in as mature a state as the other frontends
(web, CLI, SFTP and FTP). This means that you probably should not rely on
all changes to files in the local directory resulting in successful uploads.
There might be (and have been) incompatible changes to how the feature is
configured.

We are very interested in feedback on how well this feature works for you, and
suggestions to improve its usability, functionality, and reliability.

.. _`Open Technology Fund`: https://www.opentech.fund/


Configuration
=============

The Magic Folder frontend runs as part of a gateway node. To set it up, you
must use the ``tahoe magic-folder`` CLI. For detailed information see our
:doc:`Magic-Folder CLI design
documentation<../proposed/magic-folder/user-interface-design>`. For a
given Magic-Folder collective directory you need to run the ``tahoe
magic-folder create`` command. After that, the ``tahoe magic-folder invite``
command must be used to generate an *invite code* for each member of the
magic-folder collective. A confidential, authenticated communications channel
should be used to transmit the invite code to each member, who will be
joining using the ``tahoe magic-folder join`` command.

These settings are persisted in the ``[magic_folder]`` section of the
gateway's ``tahoe.cfg`` file.

``[magic_folder]``

``enabled = (boolean, optional)``

    If this is ``True``, Magic Folder will be enabled. The default value is
    ``False``.

``local.directory = (UTF-8 path)``

    This specifies the local directory to be monitored for new or changed
    files. If the path contains non-ASCII characters, it should be encoded
    in UTF-8 regardless of the system's filesystem encoding. Relative paths
    will be interpreted starting from the node's base directory.

You should not normally need to set these fields manually because they are
set by the ``tahoe magic-folder create`` and/or ``tahoe magic-folder join``
commands. Use the ``--help`` option to these commands for more information.
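Since ``tahoe.cfg`` is ordinary INI syntax, the two fields above can be read with the standard library. A minimal sketch (the section contents here are hypothetical, not taken from a real node):

```python
import configparser
import io

# A tahoe.cfg fragment like the one `tahoe magic-folder join` would
# persist; the concrete values are made up for illustration.
sample = """
[magic_folder]
enabled = True
local.directory = /home/alice/magic
"""

config = configparser.ConfigParser()
config.read_file(io.StringIO(sample))

# getboolean() understands the True/False values documented above.
enabled = config.getboolean("magic_folder", "enabled", fallback=False)
local_dir = config.get("magic_folder", "local.directory", fallback=None)

print(enabled, local_dir)
```

This only illustrates the file format; the gateway itself parses the section with its own configuration machinery.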
After setting up a Magic Folder collective and starting or restarting each
gateway, you can confirm that the feature is working by copying a file into
any local directory, and checking that it appears on other clients.
Large files may take some time to appear.

The 'Operational Statistics' page linked from the Welcome page shows counts
of the number of files uploaded, the number of change events currently
queued, and the number of failed uploads. The 'Recent Uploads and Downloads'
page and the node :doc:`log<../logging>` may be helpful in determining the
cause of any failures.


.. _Known Issues in Magic-Folder:

Known Issues and Limitations With Magic-Folder
==============================================

This feature only works on Linux and Windows. There is a ticket to add
support for Mac OS X and BSD-based systems (`#1432`_).

The only way to determine whether uploads have failed is to look at the
'Operational Statistics' page linked from the Welcome page. This only shows
a count of failures, not the names of files. Uploads are never retried.

The Magic Folder frontend performs its uploads sequentially (i.e. it waits
until each upload is finished before starting the next), even when there
would be enough memory and bandwidth to perform them efficiently in parallel.
A Magic Folder upload can occur in parallel with an upload by a different
frontend, though. (`#1459`_)

On Linux, if there are a large number of near-simultaneous file creation or
change events (greater than the number specified in the file
``/proc/sys/fs/inotify/max_queued_events``), it is possible that some events
could be missed. This is fairly unlikely under normal circumstances, because
the default value of ``max_queued_events`` in most Linux distributions is
16384, and events are removed from this queue immediately without waiting for
the corresponding upload to complete. (`#1430`_)

The Windows implementation might also occasionally miss file creation or
change events, due to limitations of the underlying Windows API
(``ReadDirectoryChangesW``). We do not know how likely or unlikely this is.
(`#1431`_)

Some filesystems may not support the necessary change notifications, so it
is recommended for the local directory to be on a directly attached
disk-based filesystem, not a network filesystem or one provided by a virtual
machine.

The ``private/magic_folder_dircap`` and ``private/collective_dircap`` files
cannot use an alias or path to specify the upload directory. (`#1711`_)

If a file in the upload directory is changed (actually relinked to a new
file), then the old file is still present on the grid, and any other caps
to it will remain valid. Eventually it will be possible to use
:doc:`../garbage-collection` to reclaim the space used by these files;
currently, however, they are retained indefinitely. (`#2440`_)

Unicode filenames are supported on both Linux and Windows, but on Linux, the
local name of a file must be encoded correctly in order for it to be uploaded.
The expected encoding is that printed by
``python -c "import sys; print sys.getfilesystemencoding()"``.
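The encoding requirement can be illustrated with a short check (a sketch of the idea only; ``is_uploadable_name`` is our hypothetical helper, not Magic Folder code, and on Python 3 ``sys.getfilesystemencoding()`` replaces the Python 2 one-liner above):

```python
import sys

# On Python 3, the equivalent of the command quoted above:
encoding = sys.getfilesystemencoding()

def is_uploadable_name(name_bytes, encoding=encoding):
    """True if the on-disk byte name decodes with the filesystem
    encoding, which is roughly the condition for a successful upload."""
    try:
        name_bytes.decode(encoding)
        return True
    except UnicodeDecodeError:
        return False

print(is_uploadable_name("héllo.txt".encode("utf-8"), "utf-8"))  # True
print(is_uploadable_name(b"\xff\xfe", "utf-8"))                  # False
```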
On Windows, local directories with non-ASCII names are not currently working.
(`#2219`_)

On Windows, when a node has Magic Folder enabled, it is unresponsive to Ctrl-C
(it can only be killed using Task Manager or similar). (`#2218`_)

.. _`#1430`: https://tahoe-lafs.org/trac/tahoe-lafs/ticket/1430
.. _`#1431`: https://tahoe-lafs.org/trac/tahoe-lafs/ticket/1431
.. _`#1432`: https://tahoe-lafs.org/trac/tahoe-lafs/ticket/1432
.. _`#1459`: https://tahoe-lafs.org/trac/tahoe-lafs/ticket/1459
.. _`#1711`: https://tahoe-lafs.org/trac/tahoe-lafs/ticket/1711
.. _`#2218`: https://tahoe-lafs.org/trac/tahoe-lafs/ticket/2218
.. _`#2219`: https://tahoe-lafs.org/trac/tahoe-lafs/ticket/2219
.. _`#2440`: https://tahoe-lafs.org/trac/tahoe-lafs/ticket/2440
@@ -20,7 +20,6 @@ Contents:
    frontends/CLI
    frontends/webapi
    frontends/FTP-and-SFTP
-   frontends/magic-folder
    frontends/download-status

    known_issues
@@ -37,7 +36,6 @@ Contents:
    expenses
    cautions
    write_coordination
-   magic-folder-howto
    backupdb

    anonymity-configuration
@@ -1,176 +0,0 @@

.. _magic-folder-howto:

=========================
Magic Folder Set-up Howto
=========================

#. `This document`_
#. `Setting up a local test grid`_
#. `Setting up Magic Folder`_
#. `Testing`_


This document
=============

This is preliminary documentation of how to set up Magic Folder using a test
grid on a single Linux or Windows machine, with two clients and one server.
It is aimed at a fairly technical audience.

For an introduction to Magic Folder and how to configure it
more generally, see :doc:`frontends/magic-folder`.

It is possible to adapt these instructions to run the nodes on
different machines, to synchronize between three or more clients,
to mix Windows and Linux clients, and to use multiple servers
(if the Tahoe-LAFS encoding parameters are changed).


Setting up a local test grid
============================

Linux
-----

Run these commands::

    mkdir ../grid
    bin/tahoe create-introducer ../grid/introducer
    bin/tahoe start ../grid/introducer
    export FURL=`cat ../grid/introducer/private/introducer.furl`
    bin/tahoe create-node --introducer="$FURL" ../grid/server
    bin/tahoe create-client --introducer="$FURL" ../grid/alice
    bin/tahoe create-client --introducer="$FURL" ../grid/bob


Windows
-------

Run::

    mkdir ..\grid
    bin\tahoe create-introducer ..\grid\introducer
    bin\tahoe start ..\grid\introducer

Leave the introducer running in that Command Prompt,
and in a separate Command Prompt (with the same current
directory), run::

    set /p FURL=<..\grid\introducer\private\introducer.furl
    bin\tahoe create-node --introducer=%FURL% ..\grid\server
    bin\tahoe create-client --introducer=%FURL% ..\grid\alice
    bin\tahoe create-client --introducer=%FURL% ..\grid\bob


Both Linux and Windows
----------------------

(Replace ``/`` with ``\`` for Windows paths.)

Edit ``../grid/alice/tahoe.cfg``, and make the following
changes to the ``[node]`` and ``[client]`` sections::

    [node]
    nickname = alice
    web.port = tcp:3457:interface=127.0.0.1

    [client]
    shares.needed = 1
    shares.happy = 1
    shares.total = 1
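The three ``shares.*`` settings configure Tahoe's k-of-N erasure coding: any ``shares.needed`` (k) of the ``shares.total`` (N) shares reconstruct a file, and ``shares.happy`` is the minimum number of distinct servers the shares must be placed on. A quick sketch of the storage expansion this implies (the function name is ours, for illustration):

```python
def expansion_factor(needed, total):
    """Ratio of stored bytes to original bytes under k-of-N
    erasure coding: each of N shares is ~1/k of the file."""
    return total / needed

# The 1-of-1 test-grid setting above stores exactly one copy:
print(expansion_factor(1, 1))  # 1.0

# Tahoe's defaults (3-of-10) store roughly 3.3x the original size:
print(round(expansion_factor(3, 10), 2))  # 3.33
```

This is why the howto sets 1-of-1 for a single-server test grid: with the default 3-of-10 parameters, uploads would fail until enough distinct servers were available.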
Edit ``../grid/bob/tahoe.cfg``, and make the following
change to the ``[node]`` section, and the same change as
above to the ``[client]`` section::

    [node]
    nickname = bob
    web.port = tcp:3458:interface=127.0.0.1

Note that when running nodes on a single machine,
unique port numbers must be used for each node (and they
must not clash with ports used by other server software).
Here we have used the default of 3456 for the server,
3457 for alice, and 3458 for bob.

Now start all of the nodes (the introducer should still be
running from above)::

    bin/tahoe start ../grid/server
    bin/tahoe start ../grid/alice
    bin/tahoe start ../grid/bob

On Windows, a separate Command Prompt is needed to run each
node.

Open a web browser on http://127.0.0.1:3457/ and verify that
alice is connected to the introducer and one storage server.
Then do the same for http://127.0.0.1:3458/ to verify that
bob is connected. Leave all of the nodes running for the
next stage.


Setting up Magic Folder
=======================

Linux
-----

Run::

    mkdir -p ../local/alice ../local/bob
    bin/tahoe -d ../grid/alice magic-folder create magic: alice ../local/alice
    bin/tahoe -d ../grid/alice magic-folder invite magic: bob >invitecode
    export INVITECODE=`cat invitecode`
    bin/tahoe -d ../grid/bob magic-folder join "$INVITECODE" ../local/bob

    bin/tahoe restart ../grid/alice
    bin/tahoe restart ../grid/bob

Windows
-------

Run::

    mkdir ..\local\alice ..\local\bob
    bin\tahoe -d ..\grid\alice magic-folder create magic: alice ..\local\alice
    bin\tahoe -d ..\grid\alice magic-folder invite magic: bob >invitecode
    set /p INVITECODE=<invitecode
    bin\tahoe -d ..\grid\bob magic-folder join %INVITECODE% ..\local\bob

Then close the Command Prompt windows that are running the alice and bob
nodes, and open two new ones in which to run::

    bin\tahoe start ..\grid\alice
    bin\tahoe start ..\grid\bob


Testing
=======

You can now experiment with creating files and directories in
``../local/alice`` and ``../local/bob``; any changes should be
propagated to the other directory.

Note that when a file is deleted, the corresponding file in the
other directory will be renamed to a filename ending in ``.backup``.
Deleting a directory will have no effect.

For other known issues and limitations, see :ref:`Known Issues in
Magic-Folder`.

As mentioned earlier, it is also possible to run the nodes on
different machines, to synchronize between three or more clients,
to mix Windows and Linux clients, and to use multiple servers
(if the Tahoe-LAFS encoding parameters are changed).


Configuration
=============

There will be a ``[magic_folder]`` section in your ``tahoe.cfg`` file
after setting up Magic Folder.

There is an option you can add to this section, ``poll_interval=``, to
control how often (in seconds) the Downloader will check for new things
to download.
@@ -19,9 +19,7 @@ Invites and Joins

 Inside Tahoe-LAFS we are using a channel created using `magic
 wormhole`_ to exchange configuration and the secret fURL of the
-Introducer with new clients. In the future, we would like to make the
-Magic Folder (:ref:`Magic Folder HOWTO <magic-folder-howto>`) invites and joins work this way
-as well.
+Introducer with new clients.

 This is a two-part process. Alice runs a grid and wishes to have her
 friend Bob use it as a client. She runs ``tahoe invite bob`` which
@@ -14,8 +14,4 @@ index only lists the files that are in .rst format.
    :maxdepth: 2

    leasedb
-   magic-folder/filesystem-integration
-   magic-folder/remote-to-local-sync
-   magic-folder/user-interface-design
-   magic-folder/multi-party-conflict-detection
    http-storage-node-protocol
@@ -1,118 +0,0 @@

Magic Folder local filesystem integration design
================================================

*Scope*

This document describes how to integrate the local filesystem with Magic
Folder in an efficient and reliable manner. For now we ignore Remote to
Local synchronization; the design and implementation of this is scheduled
for a later time. We also ignore multiple writers for the same Magic
Folder, which may or may not be supported in future. The design here will
be updated to account for those features in later Objectives. Objective 3
may require modifying the database schema or operation, and Objective 5
may modify the user interface.

Tickets on the Tahoe-LAFS trac with the `otf-magic-folder-objective2`_
keyword are within the scope of the local filesystem integration for
Objective 2.

.. _otf-magic-folder-objective2: https://tahoe-lafs.org/trac/tahoe-lafs/query?status=!closed&keywords=~otf-magic-folder-objective2

.. _filesystem_integration-local-scanning-and-database:

*Local scanning and database*

When a Magic-Folder-enabled node starts up, it scans all directories
under the local directory and adds every file to a first-in first-out
"scan queue". When processing the scan queue, redundant uploads are
avoided by using the same mechanism the Tahoe backup command uses: we
keep track of previous uploads by recording each file's metadata, such as
size, ``ctime`` and ``mtime``. This information is stored in a database,
referred to from now on as the magic folder db. Using this recorded
state, we ensure that when Magic Folder is subsequently started, the
local directory tree can be scanned quickly by comparing current
filesystem metadata with the previously recorded metadata. Each file
referenced in the scan queue is uploaded only if its metadata differs at
the time it is processed. If a change event is detected for a file that
is already queued (and therefore will be processed later), the redundant
event is ignored.
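The scan-queue behaviour described above — FIFO processing, metadata comparison against recorded state, and dropping redundant events for already-queued paths — can be sketched as follows (an illustrative model with invented names, not the actual Tahoe-LAFS implementation):

```python
from collections import deque

class ScanQueue:
    """FIFO of paths pending upload; redundant events are ignored."""

    def __init__(self):
        self._queue = deque()
        self._queued = set()   # paths currently in the queue
        self._recorded = {}    # path -> (size, ctime, mtime)

    def add_event(self, path):
        if path in self._queued:   # change event for an already-queued
            return                 # file: redundant, so ignore it
        self._queue.append(path)
        self._queued.add(path)

    def process_one(self, current_metadata, upload):
        """Upload the oldest queued path only if its metadata differs
        from the recorded state at the time it is processed."""
        path = self._queue.popleft()
        self._queued.discard(path)
        meta = current_metadata(path)
        if self._recorded.get(path) != meta:
            upload(path)
            self._recorded[path] = meta

# Two events for the same file collapse into a single upload:
uploads = []
q = ScanQueue()
q.add_event("a.txt")
q.add_event("a.txt")   # redundant: already queued
q.process_one(lambda p: (3, 1.0, 2.0), uploads.append)
print(uploads)  # ['a.txt']
```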
To implement the magic folder db, we will use an SQLite schema that
is initially the existing Tahoe-LAFS backup schema. This schema may
change in later objectives; this will cause no backward compatibility
problems, because this new feature will be developed on a branch that
makes no compatibility guarantees. However, we will have a separate SQLite
database file and a separate mutex lock just for Magic Folder. This avoids
usability problems related to mutual exclusion. (If a single file and
lock were used, a backup would block Magic Folder updates for a long
time, and a user would not be able to tell when backups are possible
because Magic Folder would acquire the lock at arbitrary times.)
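A minimal sketch of such a per-file metadata table in its own SQLite database file (the table and column names here are illustrative; the real backup schema differs in detail):

```python
import sqlite3

# A database file dedicated to Magic Folder (":memory:" here for
# illustration); a real node would use a file under its base directory.
db = sqlite3.connect(":memory:")
db.execute("""
    CREATE TABLE IF NOT EXISTS local_files (
        path  TEXT PRIMARY KEY,  -- relative path within the magic folder
        size  INTEGER,
        ctime REAL,
        mtime REAL
    )
""")

def needs_upload(path, size, ctime, mtime):
    """True if the file's current metadata differs from recorded state."""
    row = db.execute(
        "SELECT size, ctime, mtime FROM local_files WHERE path = ?",
        (path,)).fetchone()
    return row != (size, ctime, mtime)

def record(path, size, ctime, mtime):
    db.execute(
        "INSERT OR REPLACE INTO local_files VALUES (?, ?, ?, ?)",
        (path, size, ctime, mtime))

print(needs_upload("a.txt", 3, 1.0, 2.0))   # True: never seen before
record("a.txt", 3, 1.0, 2.0)
print(needs_upload("a.txt", 3, 1.0, 2.0))   # False: metadata unchanged
```

Keeping this in a separate database file (rather than the backup db) is what gives Magic Folder its own independent lock, as discussed above.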
|
||||
|
||||
|
||||
*Eventual consistency property*
|
||||
|
||||
During the process of reading a file in order to upload it, it is not
|
||||
possible to prevent further local writes. Such writes will result in
|
||||
temporary inconsistency (that is, the uploaded file will not reflect
|
||||
what the contents of the local file were at any specific time). Eventual
|
||||
consistency is reached when the queue of pending uploads is empty. That
|
||||
is, a consistent snapshot will be achieved eventually when local writes
|
||||
to the target folder cease for a sufficiently long period of time.
|
||||
|
||||
|
||||
*Detecting filesystem changes*

For the Linux implementation, we will use the `inotify`_ Linux kernel
subsystem to gather events on the local Magic Folder directory tree. This
implementation was already present in Tahoe-LAFS 1.9.0, but needs to be
changed to gather directory creation and move events, in addition to the
events indicating that a file has been written that are gathered by the
current code.

.. _`inotify`: https://en.wikipedia.org/wiki/Inotify

For the Windows implementation, we will use the ``ReadDirectoryChangesW``
Win32 API. The prototype implementation simulates a Python interface to
the inotify API in terms of ``ReadDirectoryChangesW``, allowing most of
the code to be shared across platforms.

The alternative of using `NTFS Change Journals`_ for Windows was
considered, but appears to be more complicated and does not provide any
additional functionality over the scanning approach described above.
The Change Journal mechanism is also only available for NTFS filesystems,
whereas FAT32 filesystems are still common in user installations of Windows.

.. _`NTFS Change Journals`: https://msdn.microsoft.com/en-us/library/aa363803%28VS.85%29.aspx

When we detect the creation of a new directory below the local Magic
Folder directory, we create it in the Tahoe-LAFS filesystem, and also
scan the new local directory for new files. This scan is necessary to
avoid missing events for the creation of files in a new directory before it
can be watched, and to correctly handle cases where an existing directory
is moved to be under the local Magic Folder directory.
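The catch-up scan of a newly created (or newly moved-in) directory can be sketched as a simple recursive walk (illustrative only; the real code feeds its results into the scan queue described earlier):

```python
import os
import tempfile

def scan_new_directory(root):
    """Return every file under `root`, as the catch-up scan must, so
    that files created before the watch was established are found."""
    found = []
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            found.append(os.path.relpath(os.path.join(dirpath, name), root))
    return sorted(found)

# Files created before a watch could be added are still picked up:
with tempfile.TemporaryDirectory() as root:
    os.makedirs(os.path.join(root, "sub"))
    for rel in ("a.txt", os.path.join("sub", "b.txt")):
        with open(os.path.join(root, rel), "w") as f:
            f.write("x")
    print(scan_new_directory(root))
```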
*User interface*

The Magic Folder local filesystem integration will initially have a
provisional configuration-file-based interface that may not be ideal from
a usability perspective. Creating our local filesystem integration in
this manner will allow us to use and test it independently of the rest of
the Magic Folder software components. We will focus greater attention on
user interface design as a later milestone in our development roadmap.

The configuration file, ``tahoe.cfg``, must define a target local
directory to be synchronized. Provisionally, this configuration will
replace the current ``[drop_upload]`` section::

    [magic_folder]
    enabled = true
    local.directory = "/home/human"

When a filesystem directory is first configured for Magic Folder, the user
needs to create the remote Tahoe-LAFS directory using ``tahoe mkdir``,
and configure the Magic-Folder-enabled node with its URI (e.g. by putting
it in a file ``private/magic_folder_dircap``). If there are existing
files in the local directory, they will be uploaded as a result of the
initial scan described earlier.
@@ -1,373 +0,0 @@

Multi-party Conflict Detection
==============================

The current Magic-Folder remote conflict detection design does not properly detect remote conflicts
for groups of three or more parties. This design is specified in the "Fire Dragon" section of this document:
https://github.com/tahoe-lafs/tahoe-lafs/blob/2551.wip.2/docs/proposed/magic-folder/remote-to-local-sync.rst#fire-dragons-distinguishing-conflicts-from-overwrites

This Tahoe-LAFS trac `ticket comment`_ outlines a scenario with
three parties in which a remote conflict is falsely detected.

.. _`ticket comment`: https://tahoe-lafs.org/trac/tahoe-lafs/ticket/2551#comment:22


Summary and definitions
=======================

Abstract file: a file being shared by a Magic Folder.

Local file: a file in a client's local filesystem corresponding to an abstract file.

Relative path: the path of an abstract or local file relative to the Magic Folder root.

Version: a snapshot of an abstract file, with associated metadata, that is uploaded by a Magic Folder client.

A version is associated with the file's relative path, its contents, and
mtime and ctime timestamps. Versions also have a unique identity.

Follows relation:

* If and only if a change to a client's local file at relative path F that results in an upload of version V'
  was made when the client already had version V of that file, then we say that V' directly follows V.
* The follows relation is the irreflexive transitive closure of the "directly follows" relation.

The follows relation is transitive and acyclic, and therefore defines a DAG, called the
Version DAG. Different abstract files correspond to disconnected sets of nodes in the Version DAG
(in other words, there are no "follows" relations between different files).

The DAG is only ever extended, not mutated.
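The "directly follows" edges and the derived relation can be sketched as a parent-pointer DAG walk (an illustrative model; the names ``Snapshot``, ``follows`` and ``classify`` are ours, not from the Magic Folder code):

```python
class Snapshot:
    """A version of one abstract file; `parents` are the versions
    it directly follows."""
    def __init__(self, name, parents=()):
        self.name = name
        self.parents = tuple(parents)

def follows(v_new, v_old):
    """True iff v_new follows v_old: the irreflexive transitive
    closure of 'directly follows', computed by walking parents."""
    seen, stack = set(), list(v_new.parents)
    while stack:
        v = stack.pop()
        if v is v_old:
            return True
        if id(v) not in seen:
            seen.add(id(v))
            stack.extend(v.parents)
    return False

def classify(ours, theirs):
    """Initial classification of a remotely observed version."""
    return "overwrite" if follows(theirs, ours) else "conflict"

# Bob's V' was made on top of V (overwrite); Alice's concurrent V''
# was also made on top of V, so relative to V' it is a conflict.
v = Snapshot("V")
v_prime = Snapshot("V'", [v])
v_doubleprime = Snapshot("V''", [v])
print(classify(v, v_prime))              # overwrite
print(classify(v_prime, v_doubleprime))  # conflict
```

Note that `follows` is irreflexive by construction: the walk starts from the parents, so a version never follows itself.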
|
||||
|
||||
The desired behaviour for initially classifying overwrites and conflicts is as follows:
|
||||
|
||||
* if a client Bob currently has version V of a file at relative path F, and it sees a new version V'
|
||||
of that file in another client Alice's DMD, such that V' follows V, then the write of the new version
|
||||
is initially an overwrite and should be to the same filename.
|
||||
* if, in the same situation, V' does not follow V, then the write of the new version should be
|
||||
classified as a conflict.
|
||||
|
||||
The existing :doc:`remote-to-local-sync` document defines when an initial
|
||||
overwrite should be reclassified as a conflict.
|
||||
|
||||
The above definitions completely specify the desired solution of the false
|
||||
conflict behaviour described in the `ticket comment`_. However, they do not give
|
||||
a concrete algorithm to compute the follows relation, or a representation in the
|
||||
Tahoe-LAFS file store of the metadata needed to compute it.
|
||||
|
||||
We will consider two alternative designs, proposed by Leif Ryge and
|
||||
Zooko Wilcox-O'Hearn, that aim to fill this gap.
|
||||
|
||||
|
||||
|
||||
Leif's Proposal: Magic-Folder "single-file" snapshot design
|
||||
===========================================================
|
||||
|
||||
Abstract
|
||||
--------
|
||||
|
||||
We propose a relatively simple modification to the initial Magic Folder design which
|
||||
adds merkle DAGs of immutable historical snapshots for each file. The full history
|
||||
does not necessarily need to be retained, and the choice of how much history to retain
|
||||
can potentially be made on a per-file basis.
|
||||
|
||||
Motivation:
|
||||
-----------
|
||||
|
||||
no SPOFs, no admins
|
||||
```````````````````
|
||||
|
||||
Additionally, the initial design had two cases of excess authority:
|
||||
|
||||
1. The magic folder administrator (inviter) has everyone's write-caps and is thus essentially "root"
|
||||
2. Each client shares ambient authority and can delete anything or everything and
|
||||
(assuming there is not a conflict) the data will be deleted from all clients. So, each client
|
||||
is effectively "root" too.
|
||||
|
||||
Thus, while it is useful for file synchronization, the initial design is a much less safe place
|
||||
to store data than in a single mutable tahoe directory (because more client computers have the
|
||||
possibility to delete it).
|
||||
|
||||
|
||||
Glossary
|
||||
--------
|
||||
|
||||
- merkle DAG: like a merkle tree but with multiple roots, and with each node potentially having multiple parents
|
||||
- magic folder: a logical directory that can be synchronized between many clients
|
||||
(devices, users, ...) using a Tahoe-LAFS storage grid
|
||||
- client: a Magic-Folder-enabled Tahoe-LAFS client instance that has access to a magic folder
|
||||
- DMD: "distributed mutable directory", a physical Tahoe-LAFS mutable directory.
|
||||
Each client has the write cap to their own DMD, and read caps to all other client's DMDs
|
||||
(as in the original Magic Folder design).
|
||||
- snapshot: a reference to a version of a file; represented as an immutable directory containing
|
||||
an entry called "content" (pointing to the immutable file containing the file's contents),
|
||||
and an entry called "parent0" (pointing to a parent snapshot), and optionally parent1 through
|
||||
parentN pointing at other parents. The Magic Folder snapshot object is conceptually very similar
|
||||
to a git commit object, except for that it is created automatically and it records the history of an
|
||||
individual file rather than an entire repository. Also, commits do not need to have authors
|
||||
(although an author field could be easily added later).
|
||||
- deletion snapshot: immutable directory containing no content entry (only one or more parents)
|
||||
- capability: a Tahoe-LAFS diminishable cryptographic capability
|
||||
- cap: short for capability
|
||||
- conflict: the situation when another client's current snapshot for a file is different than our current snapshot, and is not a descendant of ours.
|
||||
- overwrite: the situation when another client's current snapshot for a file is a (not necessarily direct) descendant of our current snapshot.
|
||||
|
||||
|
||||
Overview
--------

This new design will track the history of each file using "snapshots" which are
created at each upload. Each snapshot will specify one or more parent snapshots,
forming a directed acyclic graph. A Magic-Folder user's DMD uses a flattened directory
hierarchy naming scheme, as in the original design. But instead of pointing directly
at file contents, each file name will link to that user's latest snapshot for that file.

Inside the DMD there will also be an immutable directory containing the client's
subscriptions (read-caps to other clients' DMDs).

Clients periodically poll each other's DMDs. When they see that the current snapshot for
a file is different from their own current snapshot for that file, they immediately begin
downloading its contents and then walk backwards through the DAG from the new snapshot
until they find their own snapshot or a common ancestor.
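
The backward walk just described can be sketched as a breadth-first traversal over
snapshot parent links. This is an illustrative sketch only, not the Magic Folder
implementation: the ``parents`` mapping stands in for fetching a snapshot's immutable
directory and reading its parent0..parentN entries, and the snapshot names are
hypothetical.

```python
from collections import deque

def classify_update(theirs, ours, parents):
    """Walk backwards through the DAG from `theirs`, looking for `ours`.

    Returns "overwrite" if `ours` is an ancestor of `theirs` (a git-style
    fast-forward), otherwise "conflict"."""
    queue = deque([theirs])
    seen = set()
    while queue:
        snap = queue.popleft()
        if snap == ours:
            return "overwrite"
        if snap in seen:
            continue
        seen.add(snap)
        queue.extend(parents.get(snap, []))  # follow parent0..parentN links
    return "conflict"

# Example DAG: X is the common ancestor; XA and XB are sibling snapshots.
dag = {"XA": ["X"], "XB": ["X"], "X": []}
print(classify_update("XA", "X", dag))   # -> overwrite
print(classify_update("XB", "XA", dag))  # -> conflict
```

In the real system the walk would also stop at a common ancestor, and could be
interleaved with the content download, since the content is needed in either case.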

For the common ancestor search to be efficient, the client will need to keep a local
store (in the magic folder db) of all of the snapshots (but not their contents) between
the oldest current snapshot of any of its subscriptions and its own current snapshot.
See "local cache purging policy" below for more details.

If the new snapshot is a descendant of the client's existing snapshot, then this update
is an "overwrite" - like a git fast-forward. When the download of the new file completes,
the client can overwrite the existing local file with the new contents and update its DMD
to point at the new snapshot.

If the new snapshot is not a descendant of the client's current snapshot, then the update
is a conflict. The new file is downloaded and named $filename.conflict-$user1,$user2
(including a list of the other subscribers that have that version as their current version).

Changes to the local .conflict- file are not tracked. When that file disappears
(either by deletion or by being renamed) a new snapshot for the conflicting file is
created which has two parents - the client's snapshot prior to the conflict, and the
new conflicting snapshot. If multiple .conflict files are deleted or renamed in a short
period of time, a single conflict-resolving snapshot with more than two parents can be created.

! I think this behavior will confuse users.


Tahoe-LAFS snapshot objects
---------------------------

These Tahoe-LAFS snapshot objects only track the history of a single file, not a
directory hierarchy. Snapshot objects contain only two field types:

- ``Content``: an immutable capability of the file contents (omitted for a deletion snapshot)
- ``Parent0..N``: immutable capabilities representing parent snapshots

An interesting side effect of this snapshot object is that there is no snapshot author:
the only notion of an identity in the Magic-Folder system is the write capability of
the user's DMD.

The snapshot object is an immutable directory which looks like this::

    content -> immutable cap to file content
    parent0 -> immutable cap to a parent snapshot object
    parent1..N -> more parent snapshots
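
Since a snapshot is just an immutable directory, it could be created with a single
``t=mkdir-immutable`` call to the Tahoe-LAFS web API. The sketch below only builds the
JSON children specification for such a call; the capability strings are placeholders,
and the helper name is ours, not the implementation's.

```python
import json

def snapshot_children(content_cap, parent_caps):
    """Build the children spec for a snapshot object, in the JSON format
    accepted by the Tahoe-LAFS web API's ``t=mkdir-immutable`` operation.
    `content_cap` is None for a deletion snapshot."""
    children = {}
    if content_cap is not None:
        children["content"] = ["filenode", {"ro_uri": content_cap}]
    for i, cap in enumerate(parent_caps):
        children["parent%d" % i] = ["dirnode", {"ro_uri": cap}]
    return children

# A snapshot with one parent (placeholder caps, not real URIs):
body = json.dumps(snapshot_children("URI:CHK:aaaa", ["URI:DIR2-CHK:bbbb"]))
# `body` would be POSTed to e.g. http://127.0.0.1:3456/uri?t=mkdir-immutable
```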


Snapshot Author Identity
------------------------

Snapshot identity might become an important feature so that bad actors
can be recognized and other clients can stop "subscribing" to (polling for) updates
from them.

Perhaps snapshots could be signed by the user's Magic-Folder write key for this
purpose? It is probably a bad idea to reuse the write-cap key for this. Better to
introduce ed25519 identity keys which can (optionally) sign snapshot contents and
store the signature as another member of the immutable directory.


Conflict Resolution
-------------------

detection of conflicts
``````````````````````

A Magic-Folder client updates a given file's current snapshot link to a snapshot
which is a descendant of the previous snapshot. For a given file, let's say "file1",
Alice can detect that Bob's DMD has a "file1" that links to a snapshot which
conflicts. Two snapshots conflict if one is not an ancestor of the other.

a possible UI for resolving conflicts
`````````````````````````````````````

If Alice links a conflicting snapshot object for a file named "file1",
Bob and Carole will see a file in their Magic-Folder called "file1.conflicted.Alice".
Alice conversely will see an additional file called "file1.conflicted.previous".
If Alice wishes to resolve the conflict with her new version of the file then
she simply deletes the file called "file1.conflicted.previous". If she wants to
choose the other version then she moves it into place::

    mv file1.conflicted.previous file1

This scheme works for any number of conflicts. Bob, for instance, could choose
the same resolution for the conflict, like this::

    mv file1.conflicted.Alice file1

Deletion propagation and eventual Garbage Collection
----------------------------------------------------

When a user deletes a file, this is represented by a link from their DMD file
object to a deletion snapshot. Eventually all users will link this deletion
snapshot into their DMD. Once all users have the link, they locally cache
the deletion snapshot and remove the link to that file in their DMD.
Deletions can of course be undone; this means creating a new snapshot
object that declares itself a descendant of the deletion snapshot.

Clients periodically renew leases on all capabilities recursively linked
to in their DMD. Files which are unlinked by ALL the users of a
given Magic-Folder will eventually be garbage collected.

Lease expiry duration must be tuned properly by storage servers such that
garbage collection does not occur too frequently.


Performance Considerations
--------------------------

local changes
`````````````

Our old scheme requires two remote Tahoe-LAFS operations per local file modification:

1. upload new file contents (as an immutable file)
2. modify the mutable directory (DMD) to link to the immutable file cap

Our new scheme requires three remote operations:

1. upload new file contents (as an immutable file)
2. upload an immutable directory representing the Tahoe-LAFS snapshot object
3. modify the mutable directory (DMD) to link to the immutable snapshot object

remote changes
``````````````

Our old scheme requires one remote Tahoe-LAFS operation per remote file modification
(not counting the polling of the DMD):

1. download the new file content

Our new scheme requires a minimum of two remote operations (not counting the polling
of the DMD) for conflicting downloads, or three remote operations for overwrite
downloads:

1. download the new snapshot object
2. download the content it points to
3. if the download is an overwrite, modify the DMD to indicate that the downloaded
   version is the current version

If the new snapshot is not a direct descendant of our current snapshot, or of the
other party's previous snapshot that we saw, we will also need to download more
snapshots to determine whether it is a conflict or an overwrite. However, those
downloads can be done in parallel with the content download, since we will need to
download the content in either case.

While the old scheme is obviously more efficient, we think that the properties
provided by the new scheme make it worth the additional cost.

Physical updates to the DMD obviously need to be serialized, so multiple logical
updates should be combined when an update is already in progress.

conflict detection and local caching
````````````````````````````````````

Local caching of snapshots is important for performance.
We refer to the client's local snapshot cache as the ``magic-folder db``.

Conflict detection can be expensive because it may require the client
to download many snapshots from the other user's DMD in order to try
to find its own current snapshot or a descendant. The cost of scanning
the remote DMDs should not be very high unless the client conducting the
scan has a lot of history to download because it was offline for a long
time while many new snapshots were distributed.


local cache purging policy
``````````````````````````

The client's current snapshot for each file should be cached at all times.
When all clients' views of a file are synchronized (they all have the same
snapshot for that file), no ancestry for that file needs to be cached.
When clients' views of a file are *not* synchronized, the most recent
common ancestor of all clients' snapshots must be kept cached, as must
all intermediate snapshots.


Local Merge Property
--------------------

Bob can, in fact, set a pre-existing directory (with files) as his new Magic-Folder
directory, resulting in a merge of the Magic-Folder with Bob's local directory.
Filename collisions will result in conflicts because Bob's new snapshots are not
descendants of the existing Magic-Folder file snapshots.


Example: simultaneous update with four parties:

1. A, B, C, D are in sync for file "foo" at snapshot X.
2. A and B simultaneously change the file, creating snapshots XA and XB (both
   descendants of X).
3. C hears about XA first, and D hears about XB first. Both accept an overwrite.
4. All four parties hear about the other update they hadn't heard about yet.
5. Result:

   - everyone's local file "foo" has the content pointed to by the snapshot in their
     DMD's "foo" entry
   - A and C's DMDs each have the "foo" entry pointing at snapshot XA
   - B and D's DMDs each have the "foo" entry pointing at snapshot XB
   - A and C have a local file called foo.conflict-B,D with XB's content
   - B and D have a local file called foo.conflict-A,C with XA's content

Later:

- Everyone ignores the conflict, and continues updating their local "foo", but slowly
  enough that there are no further conflicts, so that A and C remain in sync with each
  other, and B and D remain in sync with each other.

- A and C's foo.conflict-B,D file continues to be updated with the latest version of
  the file B and D are working on, and vice versa.

- A and C edit the file at the same time again, causing a new conflict.

- Local files are now:

  A: "foo", "foo.conflict-B,D", "foo.conflict-C"

  C: "foo", "foo.conflict-B,D", "foo.conflict-A"

  B and D: "foo", "foo.conflict-A", "foo.conflict-C"

- Finally, D decides to look at "foo.conflict-A" and "foo.conflict-C", and they
  manually integrate (or decide to ignore) the differences into their own local
  file "foo".

- D deletes their conflict files.

- D's DMD now points to a snapshot that is a descendant of everyone else's current
  snapshot, resolving all conflicts.

- The conflict files on A, B, and C disappear, and everyone's local file "foo"
  contains D's manually-merged content.


Daira: I think it is too complicated to include multiple nicknames in the .conflict
files (e.g. "foo.conflict-B,D"). It should be sufficient to have one file for each
other client, reflecting that client's latest version, regardless of who else it
conflicts with.


Zooko's Design (as interpreted by Daira)
========================================

A version map is a mapping from client nickname to version number.

Definition: a version map M' strictly-follows a version map M iff for every entry
c->v in M, there is an entry c->v' in M' such that v' >= v, and v' > v for at least
one such entry. (The worked example below requires this reading; requiring v' > v
for *every* entry would make overwrites impossible once the maps contain entries
for all clients.)

Each client maintains a 'local version map' and a 'conflict version map' for each
file in its magic folder db. If it has never written the file, then the entry for
its own nickname in the local version map is zero. The conflict version map only
contains entries for nicknames B where "$FILENAME.conflict-$B" exists.

When a client A uploads a file, it increments the version for its own nickname in
its local version map for the file, and includes that map as metadata with its
upload.

A download by client A from client B is an overwrite iff the downloaded version map
strictly-follows A's local version map for that file; in this case A replaces its
local version map with the downloaded version map. Otherwise it is a conflict, and
the download is put into "$FILENAME.conflict-$B"; in this case A's local version map
remains unchanged, and the entry B->v taken from the downloaded version map is added
to its conflict version map.
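
The rule above can be sketched as follows. Here "strictly-follows" is interpreted as
vector-clock dominance (no entry goes backwards and the maps are not equal), which is
what the worked example below requires; the function names are ours, not the
implementation's.

```python
def strictly_follows(m_new, m_old):
    """True iff version map `m_new` strictly-follows `m_old`:
    no entry of `m_old` is newer than in `m_new`, and at least
    one entry advances (vector-clock dominance)."""
    if any(m_new.get(c, 0) < v for c, v in m_old.items()):
        return False
    return m_new != m_old

def apply_download(local, conflicts, downloaded, sender):
    """Classify a download from `sender` and return the updated
    (kind, local_map, conflict_map) triple."""
    if strictly_follows(downloaded, local):
        return "overwrite", downloaded, conflicts
    new_conflicts = dict(conflicts)
    new_conflicts[sender] = downloaded[sender]  # entry B->v from the download
    return "conflict", local, new_conflicts

agreed = {"A": 10, "B": 20, "C": 30}   # everyone agrees
a_up = {"A": 11, "B": 20, "C": 30}     # A uploads
b_up = {"A": 10, "B": 21, "C": 30}     # B uploads

# For C, each update considered alone is an overwrite:
print(apply_download(agreed, {}, a_up, "A")[0])  # -> overwrite
# But after accepting A's update, B's update is a conflict:
print(apply_download(a_up, {}, b_up, "B")[0])    # -> conflict
```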

If client A deletes or renames a conflict file "$FILENAME.conflict-$B", then A copies
the entry for B from its conflict version map to its local version map, deletes
the entry for B in its conflict version map, and performs another upload (with an
incremented version number) of $FILENAME.


Example::

    A, B, C = (10, 20, 30); everyone agrees.
    A updates: (11, 20, 30)
    B updates: (10, 21, 30)

C will see either A or B first. Both would be an overwrite, if considered alone.


@ -1,951 +0,0 @@

Magic Folder design for remote-to-local sync
============================================

Scope
-----

In this Objective we will design remote-to-local synchronization:

* How to efficiently determine which objects (files and directories) have
  to be downloaded in order to bring the current local filesystem into sync
  with the newly-discovered version of the remote filesystem.
* How to distinguish overwrites, in which the remote side was aware of
  your most recent version and overwrote it with a new version, from
  conflicts, in which the remote side was unaware of your most recent
  version when it published its new version. The latter needs to be raised
  to the user as an issue the user will have to resolve, and the former must
  not bother the user.
* How to overwrite the (stale) local versions of those objects with the
  newly acquired objects, while preserving backed-up versions of those
  overwritten objects in case the user didn't want this overwrite and wants
  to recover the old version.

Tickets on the Tahoe-LAFS trac with the `otf-magic-folder-objective4`_
keyword are within the scope of the remote-to-local synchronization
design.

.. _otf-magic-folder-objective4: https://tahoe-lafs.org/trac/tahoe-lafs/query?status=!closed&keywords=~otf-magic-folder-objective4


Glossary
''''''''

Object: a file or directory

DMD: distributed mutable directory

Folder: an abstract directory that is synchronized between clients.
(A folder is not the same as the directory corresponding to it on
any particular client, nor is it the same as a DMD.)

Collective: the set of clients subscribed to a given Magic Folder.

Descendant: a direct or indirect child in a directory or folder tree

Subfolder: a folder that is a descendant of a magic folder

Subpath: the path from a magic folder to one of its descendants

Write: a modification to a local filesystem object by a client

Read: a read from a local filesystem object by a client

Upload: an upload of a local object to the Tahoe-LAFS file store

Download: a download from the Tahoe-LAFS file store to a local object

Pending notification: a local filesystem change that has been detected
but not yet processed.


Representing the Magic Folder in Tahoe-LAFS
-------------------------------------------

Unlike the local case where we use inotify or ReadDirectoryChangesW to
detect filesystem changes, we have no mechanism to register a monitor for
changes to a Tahoe-LAFS directory. Therefore, we must periodically poll
for changes.

An important constraint on the solution is Tahoe-LAFS' ":doc:`write
coordination directive<../../write_coordination>`", which prohibits
concurrent writes by different storage clients to the same mutable object:

    Tahoe does not provide locking of mutable files and directories. If
    there is more than one simultaneous attempt to change a mutable file
    or directory, then an UncoordinatedWriteError may result. This might,
    in rare cases, cause the file or directory contents to be accidentally
    deleted. The user is expected to ensure that there is at most one
    outstanding write or update request for a given file or directory at
    a time. One convenient way to accomplish this is to make a different
    file or directory for each person or process that wants to write.

Since it is a goal to allow multiple users to write to a Magic Folder,
if the write coordination directive remains the same as above, then we
will not be able to implement the Magic Folder as a single Tahoe-LAFS
DMD. In general therefore, we will have multiple DMDs —spread across
clients— that together represent the Magic Folder. Each client in a
Magic Folder collective polls the other clients' DMDs in order to detect
remote changes.

Six possible designs were considered for the representation of subfolders
of the Magic Folder:

1. All subfolders written by a given Magic Folder client are collapsed
   into a single client DMD, containing immutable files. The child name of
   each file encodes the full subpath of that file relative to the Magic
   Folder.

2. The DMD tree under a client DMD is a direct copy of the folder tree
   written by that client to the Magic Folder. Not all subfolders have
   corresponding DMDs; only those to which that client has written files or
   child subfolders.

3. The directory tree under a client DMD is a ``tahoe backup`` structure
   containing immutable snapshots of the folder tree written by that client
   to the Magic Folder. As in design 2, only objects written by that client
   are present.

4. *Each* client DMD contains an eventually consistent mirror of all
   files and folders written by *any* Magic Folder client. Thus each client
   must also copy changes made by other Magic Folder clients to its own
   client DMD.

5. *Each* client DMD contains a ``tahoe backup`` structure containing
   immutable snapshots of all files and folders written by *any* Magic
   Folder client. Thus each client must also create another snapshot in its
   own client DMD when changes are made by another client. (It can potentially
   batch changes, subject to latency requirements.)

6. The write coordination problem is solved by implementing `two-phase
   commit`_. Then, the representation consists of a single DMD tree which is
   written by all clients.

.. _`two-phase commit`: https://tahoe-lafs.org/trac/tahoe-lafs/ticket/1755

Here is a summary of advantages and disadvantages of each design:

+----------------------------+
|            Key             |
+=======+====================+
| \+\+  | major advantage    |
+-------+--------------------+
| \+    | minor advantage    |
+-------+--------------------+
| ‒     | minor disadvantage |
+-------+--------------------+
| ‒ ‒   | major disadvantage |
+-------+--------------------+
| ‒ ‒ ‒ | showstopper        |
+-------+--------------------+


123456+: All designs have the property that a recursive add-lease operation
starting from a *collective directory* containing all of the client DMDs,
will find all of the files and directories used in the Magic Folder
representation. Therefore the representation is compatible with :doc:`garbage
collection <../../garbage-collection>`, even when a pre-Magic-Folder client
does the lease marking.

123456+: All designs avoid "breaking" pre-Magic-Folder clients that read
a directory or file that is part of the representation.

456++: Only these designs allow a readcap to one of the client
directories —or one of their subdirectories— to be directly shared
with other Tahoe-LAFS clients (not necessarily Magic Folder clients),
so that such a client sees all of the contents of the Magic Folder.
Note that this was not a requirement of the OTF proposal, although it
is useful.

135+: A Magic Folder client has only one mutable Tahoe-LAFS object to
monitor per other client. This minimizes communication bandwidth for
polling, or alternatively the latency possible for a given polling
bandwidth.

1236+: A client does not need to make changes to its own DMD that repeat
changes that another Magic Folder client had previously made. This reduces
write bandwidth and complexity.

1‒: If the Magic Folder has many subfolders, their files will all be
collapsed into the same DMD, which could get quite large. In practice a
single DMD can easily handle the number of files expected to be written
by a client, so this is unlikely to be a significant issue.

123‒ ‒: In these designs, the set of files in a Magic Folder is
represented as the union of the files in all client DMDs. However,
when a file is modified by more than one client, it will be linked
from multiple client DMDs. We therefore need a mechanism, such as a
version number or a monotonically increasing timestamp, to determine
which copy takes priority.

35‒ ‒: When a Magic Folder client detects a remote change, it must
traverse an immutable directory structure to see what has changed.
Completely unchanged subtrees will have the same URI, allowing parts
of this traversal to be short-circuited.

24‒ ‒ ‒: When a Magic Folder client detects a remote change, it must
traverse a mutable directory structure to see what has changed. This is
more complex and less efficient than traversing an immutable structure,
because short-circuiting is not possible (each DMD retains the same URI
even if a descendant object has changed), and because the structure may
change while it is being traversed. Also the traversal needs to be robust
against cycles, which can only occur in mutable structures.

45‒ ‒: When a change occurs in one Magic Folder client, it will propagate
to all the other clients. Each client will therefore see multiple
representation changes for a single logical change to the Magic Folder
contents, and must suppress the duplicates. This is particularly
problematic for design 4, where it interacts with the preceding issue.

4‒ ‒ ‒, 5‒ ‒: There is the potential for client DMDs to get "out of sync"
with each other, potentially for long periods if errors occur. Thus each
client must be able to "repair" its client directory (and its
subdirectory structure) concurrently with performing its own writes. This
is a significant complexity burden and may introduce failure modes that
could not otherwise happen.

6‒ ‒ ‒: While two-phase commit is a well-established protocol, its
application to Tahoe-LAFS requires significant design work, and may still
leave some corner cases of the write coordination problem unsolved.


+------------------------------------------------+-----------------------------------------+
| Design Property                                | Designs Proposed                        |
+================================================+======+======+======+======+======+======+
| **advantages**                                 | *1*  | *2*  | *3*  | *4*  | *5*  | *6*  |
+------------------------------------------------+------+------+------+------+------+------+
| Compatible with garbage collection             | \+   | \+   | \+   | \+   | \+   | \+   |
+------------------------------------------------+------+------+------+------+------+------+
| Does not break old clients                     | \+   | \+   | \+   | \+   | \+   | \+   |
+------------------------------------------------+------+------+------+------+------+------+
| Allows direct sharing                          |      |      |      | \+\+ | \+\+ | \+\+ |
+------------------------------------------------+------+------+------+------+------+------+
| Efficient use of bandwidth                     | \+   |      | \+   |      | \+   |      |
+------------------------------------------------+------+------+------+------+------+------+
| No repeated changes                            | \+   | \+   | \+   |      |      | \+   |
+------------------------------------------------+------+------+------+------+------+------+
| **disadvantages**                              | *1*  | *2*  | *3*  | *4*  | *5*  | *6*  |
+------------------------------------------------+------+------+------+------+------+------+
| Can result in large DMDs                       | ‒    |      |      |      |      |      |
+------------------------------------------------+------+------+------+------+------+------+
| Need version number to determine priority      | ‒ ‒  | ‒ ‒  | ‒ ‒  |      |      |      |
+------------------------------------------------+------+------+------+------+------+------+
| Must traverse immutable directory structure    |      |      | ‒ ‒  |      | ‒ ‒  |      |
+------------------------------------------------+------+------+------+------+------+------+
| Must traverse mutable directory structure      |      | ‒ ‒ ‒|      | ‒ ‒ ‒|      |      |
+------------------------------------------------+------+------+------+------+------+------+
| Must suppress duplicate representation changes |      |      |      | ‒ ‒  | ‒ ‒  |      |
+------------------------------------------------+------+------+------+------+------+------+
| "Out of sync" problem                          |      |      |      | ‒ ‒ ‒| ‒ ‒  |      |
+------------------------------------------------+------+------+------+------+------+------+
| Unsolved design problems                       |      |      |      |      |      | ‒ ‒ ‒|
+------------------------------------------------+------+------+------+------+------+------+


Evaluation of designs
'''''''''''''''''''''

Designs 2 and 3 have no significant advantages over design 1, while
requiring higher polling bandwidth and greater complexity due to the need
to create subdirectories. These designs were therefore rejected.

Design 4 was rejected due to the out-of-sync problem, which is severe
and possibly unsolvable for mutable structures.

For design 5, the out-of-sync problem is still present but possibly
solvable. However, design 5 is substantially more complex, less efficient
in bandwidth/latency, and less scalable in number of clients and
subfolders than design 1. It only gains over design 1 on the ability to
share directory readcaps to the Magic Folder (or subfolders), which was
not a requirement. It would be possible to implement this feature in
future by switching to design 6.

For the time being, however, design 6 was considered out-of-scope for
this project.

Therefore, design 1 was chosen. That is:

    All subfolders written by a given Magic Folder client are collapsed
    into a single client DMD, containing immutable files. The child name
    of each file encodes the full subpath of that file relative to the
    Magic Folder.

Each directory entry in a DMD also stores a version number, so that the
latest version of a file is well-defined when it has been modified by
multiple clients.

To enable representing empty directories, a client that creates a
directory should link a corresponding zero-length file in its DMD,
at a name that ends with the encoded directory separator character.
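
The flat naming scheme and the empty-directory convention can be illustrated with a
reversible escaping function. The ``@``-based escaping below mirrors the style used
by the Magic Folder implementation, but the exact scheme here is illustrative, not
normative.

```python
def path2flat(path):
    """Encode a subpath (with "/" separators) as a flat child name.
    "@" is escaped first so that the encoding is reversible."""
    return path.replace("@", "@@").replace("/", "@_")

def flat2path(name):
    """Invert path2flat: "@@" -> "@", "@_" -> "/"."""
    out, i = [], 0
    while i < len(name):
        if name[i] == "@":
            out.append("/" if name[i + 1] == "_" else "@")
            i += 2
        else:
            out.append(name[i])
            i += 1
    return "".join(out)

print(path2flat("docs/readme.txt"))  # -> docs@_readme.txt
# An empty directory "emptydir/" would be represented by linking a
# zero-length file at the name "emptydir@_" (ending with the encoded
# separator), as described above.
assert flat2path(path2flat("a@b/c")) == "a@b/c"
```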

We want to enable dynamic configuration of the membership of a Magic
Folder collective, without having to reconfigure or restart each client
when another client joins. To support this, we have a single collective
directory that links to all of the client DMDs, named by their client
nicknames. If the collective directory is mutable, then it is possible
to change its contents in order to add clients. Note that a client DMD
should not be unlinked from the collective directory unless all of its
files are first copied to some other client DMD.

A client needs to be able to write to its own DMD, and read from other DMDs.
To be consistent with the `Principle of Least Authority`_, each client's
reference to its own DMD is a write capability, whereas its reference
to the collective directory is a read capability. The latter transitively
grants read access to all of the other client DMDs and the files linked
from them, as required.

.. _`Principle of Least Authority`: http://www.eros-os.org/papers/secnotsep.pdf

Design and implementation of the user interface for maintaining this
DMD structure and configuration will be addressed in Objectives 5 and 6.

During operation, each client will poll for changes on other clients
at a predetermined frequency. On each poll, it will reread the collective
directory (to allow for added or removed clients), and then read each
client DMD linked from it.
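
One poll cycle can be sketched as below. ``read_directory`` is a hypothetical
stand-in for reading a Tahoe-LAFS directory via a readcap; the caps and nicknames
are placeholders.

```python
def poll_once(collective_readcap, my_nickname, read_directory):
    """Reread the collective directory (membership may have changed),
    then read each other client's DMD linked from it.
    Returns a dict mapping nickname -> that client's DMD contents."""
    members = read_directory(collective_readcap)
    remote = {}
    for nickname, dmd_readcap in members.items():
        if nickname == my_nickname:
            continue  # no need to poll our own DMD
        remote[nickname] = read_directory(dmd_readcap)
    return remote

# In-memory stand-in for the file store (placeholder caps):
store = {
    "collective-ro": {"alice": "alice-ro", "bob": "bob-ro"},
    "alice-ro": {"file1": "snap-cap-1"},
    "bob-ro": {"file1": "snap-cap-2"},
}
print(poll_once("collective-ro", "alice", store.get))
# -> {'bob': {'file1': 'snap-cap-2'}}
```

Each DMD read would then be compared against the last-seen state to detect changes.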
|
||||
|
||||
"Hidden" files, and files with names matching the patterns used for backup,
|
||||
temporary, and conflicted files, will be ignored, i.e. not synchronized
|
||||
in either direction. A file is hidden if it has a filename beginning with
|
||||
"." (on any platform), or has the hidden or system attribute on Windows.
|
||||
|
||||
|
||||
Conflict Detection and Resolution
|
||||
---------------------------------
|
||||
|
||||
The combination of local filesystems and distributed objects is
|
||||
an example of shared state concurrency, which is highly error-prone
|
||||
and can result in race conditions that are complex to analyze.
|
||||
Unfortunately we have no option but to use shared state in this
|
||||
situation.
|
||||
|
||||
We call the resulting design issues "dragons" (as in "Here be dragons"),
|
||||
which as a convenient mnemonic we have named after the classical
|
||||
Greek elements Earth, Fire, Air, and Water.
|
||||
|
||||
Note: all filenames used in the following sections are examples,
|
||||
and the filename patterns we use in the actual implementation may
|
||||
differ. The actual patterns will probably include timestamps, and
|
||||
for conflicted files, the nickname of the client that last changed
|
||||
the file.
|
||||
|
||||
|
||||
Earth Dragons: Collisions between local filesystem operations and downloads
|
||||
'''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''
|
||||
|
||||
Write/download collisions
|
||||
~~~~~~~~~~~~~~~~~~~~~~~~~
|
||||
|
||||
Suppose that Alice's Magic Folder client is about to write a
version of ``foo`` that it has downloaded in response to a remote
change.

The criteria for distinguishing overwrites from conflicts are
described later in the `Fire Dragons`_ section. Suppose that the
remote change has been initially classified as an overwrite.
(As we will see, it may be reclassified in some circumstances.)

.. _`Fire Dragons`: #fire-dragons-distinguishing-conflicts-from-overwrites

Note that writing a file that does not already have an entry in the
:ref:`magic folder db<filesystem_integration-local-scanning-and-database>` is
initially classed as an overwrite.

A *write/download collision* occurs when another program writes
to ``foo`` in the local filesystem, concurrently with the new
version being written by the Magic Folder client. We need to
ensure that this does not cause data loss, as far as possible.

An important constraint on the design is that on Windows, it is
not possible to rename a file to the same name as an existing
file in that directory. Also, on Windows it may not be possible to
delete or rename a file that has been opened by another process
(depending on the sharing flags specified by that process).
Therefore we need to consider carefully how to handle failure
conditions.

In our proposed design, Alice's Magic Folder client follows
this procedure for an overwrite in response to a remote change:

1. Write a temporary file, say ``.foo.tmp``.
2. Use the procedure described in the `Fire Dragons`_ section
   to obtain an initial classification as an overwrite or a
   conflict. (This takes as input the ``last_downloaded_uri``
   field from the directory entry of the changed ``foo``.)
3. Set the ``mtime`` of the replacement file to be at least *T* seconds
   before the current local time. Stat the replacement file
   to obtain its ``mtime`` and ``ctime`` as stored in the local
   filesystem, and update the file's last-seen statinfo in
   the magic folder db with this information. (Note that the
   retrieved ``mtime`` may differ from the one that was set due
   to rounding.)
4. Perform a *file replacement* operation (explained below)
   with backup filename ``foo.backup``, replaced file ``foo``,
   and replacement file ``.foo.tmp``. If any step of this
   operation fails, reclassify as a conflict and stop.

To reclassify as a conflict, attempt to rename ``.foo.tmp`` to
``foo.conflicted``, suppressing errors.

The implementation of file replacement differs between Unix
and Windows. On Unix, it can be implemented as follows:

* 4a. Stat the replaced path, and set the permissions of the
  replacement file to be the same as the replaced file,
  bitwise-or'd with octal 600 (``rw-------``). If the replaced
  file does not exist, set the permissions according to the
  user's umask. If there is a directory at the replaced path,
  fail.
* 4b. Attempt to move the replaced file (``foo``) to the
  backup filename (``foo.backup``). If an ``ENOENT`` error
  occurs because the replaced file does not exist, ignore this
  error and continue with steps 4c and 4d.
* 4c. Attempt to create a hard link at the replaced filename
  (``foo``) pointing to the replacement file (``.foo.tmp``).
* 4d. Attempt to unlink the replacement file (``.foo.tmp``),
  suppressing errors.

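Steps 4a-4d above can be sketched in Python as follows (the function name
and error handling are illustrative, not the actual implementation):

```python
import os
import stat

def unix_file_replacement(replaced, replacement, backup):
    """Sketch of the Unix file replacement steps 4a-4d."""
    # 4a. Match the replaced file's permissions, OR'd with 0o600;
    #     fail if a directory occupies the replaced path.
    try:
        st = os.stat(replaced)
        if stat.S_ISDIR(st.st_mode):
            raise IsADirectoryError(replaced)
        os.chmod(replacement, (st.st_mode & 0o777) | 0o600)
    except FileNotFoundError:
        pass  # no replaced file: keep umask-derived permissions
    # 4b. Move the replaced file to the backup name, ignoring ENOENT.
    try:
        os.rename(replaced, backup)
    except FileNotFoundError:
        pass
    # 4c. Hard-link the replacement into place; FileExistsError here
    #     means we lost a race and must reclassify as a conflict.
    os.link(replacement, replaced)
    # 4d. Unlink the temporary replacement, suppressing errors.
    try:
        os.unlink(replacement)
    except OSError:
        pass
```
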
Note that, if there is no conflict, the entry for ``foo``
recorded in the :ref:`magic folder
db<filesystem_integration-local-scanning-and-database>` will
reflect the ``mtime`` set in step 3. The move operation in step
4b will cause a ``MOVED_FROM`` event for ``foo``, and the link
operation in step 4c will cause an ``IN_CREATE`` event for
``foo``. However, these events will not trigger an upload,
because they are guaranteed to be processed only after the file
replacement has finished, at which point the last-seen statinfo
recorded in the database entry will exactly match the metadata
for the file's inode on disk. (The two hard links — ``foo``
and, while it still exists, ``.foo.tmp`` — share the same inode
and therefore the same metadata.)

On Windows, file replacement can be implemented by a call to
the `ReplaceFileW`_ API (with the
``REPLACEFILE_IGNORE_MERGE_ERRORS`` flag). If an error occurs
because the replaced file does not exist, then we ignore this
error and attempt to move the replacement file to the replaced
file.

Similar to the Unix case, the `ReplaceFileW`_ operation will
cause one or more change notifications for ``foo``. The replaced
``foo`` has the same ``mtime`` as the replacement file, and so any
such notification(s) will not trigger an unwanted upload.

.. _`ReplaceFileW`: https://msdn.microsoft.com/en-us/library/windows/desktop/aa365512%28v=vs.85%29.aspx

To determine whether this procedure adequately protects against data
loss, we need to consider what happens if another process attempts to
update ``foo``, for example by renaming ``foo.other`` to ``foo``.
This requires us to analyze all possible interleavings between the
operations performed by the Magic Folder client and the other process.
(Note that atomic operations on a directory are totally ordered.)
The set of possible interleavings differs between Windows and Unix.

On Unix, for the case where the replaced file already exists, we have:

* Interleaving A: the other process' rename precedes our rename in
  step 4b, and we get an ``IN_MOVED_TO`` event for its rename by
  step 2. Then we reclassify as a conflict; its changes end up at
  ``foo`` and ours end up at ``foo.conflicted``. This avoids data
  loss.

* Interleaving B: its rename precedes ours in step 4b, and we do
  not get an event for its rename by step 2. Its changes end up at
  ``foo.backup``, and ours end up at ``foo`` after being linked there
  in step 4c. This avoids data loss.

* Interleaving C: its rename happens between our rename in step 4b,
  and our link operation in step 4c of the file replacement. The
  latter fails with an ``EEXIST`` error because ``foo`` already
  exists. We reclassify as a conflict; the old version ends up at
  ``foo.backup``, the other process' changes end up at ``foo``, and
  ours at ``foo.conflicted``. This avoids data loss.

* Interleaving D: its rename happens after our link in step 4c, and
  causes an ``IN_MOVED_TO`` event for ``foo``. Its rename also changes
  the ``mtime`` for ``foo`` so that it is different from the ``mtime``
  calculated in step 3, and therefore different from the metadata
  recorded for ``foo`` in the magic folder db. (Assuming no system
  clock changes, its rename will set an ``mtime`` timestamp
  corresponding to a time after step 4c, which is after the timestamp
  *T* seconds before step 4a, provided that *T* seconds is
  sufficiently greater than the timestamp granularity.) Therefore, an
  upload will be triggered for ``foo`` after its change, which is
  correct and avoids data loss.

If the replaced file did not already exist, an ``ENOENT`` error
occurs at step 4b, and we continue with steps 4c and 4d. The other
process' rename races with our link operation in step 4c. If the
other process wins the race then the effect is similar to
Interleaving C, and if we win the race then it is similar to
Interleaving D. Either case avoids data loss.


On Windows, the internal implementation of `ReplaceFileW`_ is similar
to what we have described above for Unix; it works like this:

* 4a′. Copy metadata (which does not include ``mtime``) from the
  replaced file (``foo``) to the replacement file (``.foo.tmp``).

* 4b′. Attempt to move the replaced file (``foo``) onto the
  backup filename (``foo.backup``), deleting the latter if it
  already exists.

* 4c′. Attempt to move the replacement file (``.foo.tmp``) to the
  replaced filename (``foo``); fail if the destination already
  exists.

Notice that this is essentially the same as the algorithm we use
for Unix, but steps 4c and 4d on Unix are combined into a single
step 4c′. (If there is a failure at step 4c′ after step 4b′ has
completed, the `ReplaceFileW`_ call will fail with return code
``ERROR_UNABLE_TO_MOVE_REPLACEMENT_2``. However, it is still
preferable to use this API over two `MoveFileExW`_ calls, because
it retains the attributes and ACLs of ``foo`` where possible.
Also note that if the `ReplaceFileW`_ call fails with
``ERROR_FILE_NOT_FOUND`` because the replaced file does not exist,
then the replacement operation ignores this error and continues with
the equivalent of step 4c′, as on Unix.)

However, on Windows the other application will not be able to
directly rename ``foo.other`` onto ``foo`` (which would fail because
the destination already exists); it will have to rename or delete
``foo`` first. Without loss of generality, let's say ``foo`` is
deleted. This complicates the interleaving analysis, because we
have two operations done by the other process interleaving with
three done by the magic folder process (rather than one operation
interleaving with four as on Unix).

So on Windows, for the case where the replaced file already exists,
we have:

* Interleaving A′: the other process' deletion of ``foo`` and its
  rename of ``foo.other`` to ``foo`` both precede our rename in
  step 4b. We get an event corresponding to its rename by step 2.
  Then we reclassify as a conflict; its changes end up at ``foo``
  and ours end up at ``foo.conflicted``. This avoids data loss.

* Interleaving B′: the other process' deletion of ``foo`` and its
  rename of ``foo.other`` to ``foo`` both precede our rename in
  step 4b. We do not get an event for its rename by step 2.
  Its changes end up at ``foo.backup``, and ours end up at ``foo``
  after being moved there in step 4c′. This avoids data loss.

* Interleaving C′: the other process' deletion of ``foo`` precedes
  our rename of ``foo`` to ``foo.backup`` done by `ReplaceFileW`_,
  but its rename of ``foo.other`` to ``foo`` does not, so we get
  an ``ERROR_FILE_NOT_FOUND`` error from `ReplaceFileW`_ indicating
  that the replaced file does not exist. We ignore this error and
  attempt to move ``foo.tmp`` to ``foo``, racing with the other
  process which is attempting to move ``foo.other`` to ``foo``.
  If we win the race, then our changes end up at ``foo``, and the
  other process' move fails. If the other process wins the race,
  then its changes end up at ``foo``, our move fails, and we
  reclassify as a conflict, so that our changes end up at
  ``foo.conflicted``. Either possibility avoids data loss.

* Interleaving D′: the other process' deletion and/or rename happen
  during the call to `ReplaceFileW`_, causing the latter to fail.
  There are two subcases:

  * if the error is ``ERROR_UNABLE_TO_MOVE_REPLACEMENT_2``, then
    ``foo`` is renamed to ``foo.backup`` and ``.foo.tmp`` remains
    at its original name after the call.
  * for all other errors, ``foo`` and ``.foo.tmp`` both remain at
    their original names after the call.

  In both subcases, we reclassify as a conflict and rename ``.foo.tmp``
  to ``foo.conflicted``. This avoids data loss.

* Interleaving E′: the other process' deletion of ``foo`` and attempt
  to rename ``foo.other`` to ``foo`` both happen after all internal
  operations of `ReplaceFileW`_ have completed. This causes deletion
  and rename events for ``foo`` (which will in practice be merged due
  to the pending delay, although we don't rely on that for
  correctness). The rename also changes the ``mtime`` for ``foo`` so
  that it is different from the ``mtime`` calculated in step 3, and
  therefore different from the metadata recorded for ``foo`` in the
  magic folder db. (Assuming no system clock changes, its rename will
  set an ``mtime`` timestamp corresponding to a time after the
  internal operations of `ReplaceFileW`_ have completed, which is
  after the timestamp *T* seconds before `ReplaceFileW`_ is called,
  provided that *T* seconds is sufficiently greater than the timestamp
  granularity.) Therefore, an upload will be triggered for ``foo``
  after its change, which is correct and avoids data loss.

.. _`MoveFileExW`: https://msdn.microsoft.com/en-us/library/windows/desktop/aa365240%28v=vs.85%29.aspx

If the replaced file did not already exist, we get an
``ERROR_FILE_NOT_FOUND`` error from `ReplaceFileW`_, and attempt to
move ``foo.tmp`` to ``foo``. This is similar to Interleaving C, and
either possibility for the resulting race avoids data loss.

We also need to consider what happens if another process opens ``foo``
and writes to it directly, rather than renaming another file onto it:

* On Unix, open file handles refer to inodes, not paths. If the other
  process opens ``foo`` before it has been renamed to ``foo.backup``,
  and then closes the file, changes will have been written to the file
  at the same inode, even if that inode is now linked at ``foo.backup``.
  This avoids data loss.

* On Windows, we have two subcases, depending on whether the sharing
  flags specified by the other process when it opened its file handle
  included ``FILE_SHARE_DELETE``. (This flag covers both deletion and
  rename operations.)

  i. If the sharing flags *do not* allow deletion/renaming, the
     `ReplaceFileW`_ operation will fail without renaming ``foo``.
     In this case we will end up with ``foo`` changed by the other
     process, and the downloaded file still in ``foo.tmp``.
     This avoids data loss.

  ii. If the sharing flags *do* allow deletion/renaming, then
      data loss or corruption may occur. This is unavoidable and
      can be attributed to the other process making a poor choice of
      sharing flags (either explicitly if it used `CreateFile`_, or
      via whichever higher-level API it used).

.. _`CreateFile`: https://msdn.microsoft.com/en-us/library/windows/desktop/aa363858%28v=vs.85%29.aspx

Note that it is possible that another process tries to open the file
between steps 4b and 4c (or 4b′ and 4c′ on Windows). In this case the
open will fail because ``foo`` does not exist. Nevertheless, no data
will be lost, and in many cases the user will be able to retry the
operation.

Above we only described the case where the download was initially
classified as an overwrite. If it was classed as a conflict, the
procedure is the same except that we choose a unique filename
for the conflicted file (say, ``foo.conflicted_unique``). We write
the new contents to ``.foo.tmp`` and then rename it to
``foo.conflicted_unique`` in such a way that the rename will fail
if the destination already exists. (On Windows this is a simple
rename; on Unix it can be implemented as a link operation followed
by an unlink, similar to steps 4c and 4d above.) If this fails
because another process wrote ``foo.conflicted_unique`` after we
chose the filename, then we retry with a different filename.


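The Unix variant of this retry loop can be sketched as follows (the
function name and the naming scheme are illustrative; the real scheme
may include timestamps and client nicknames):

```python
import os

def move_to_conflicted(tmp_path, base_path):
    """Sketch: rename tmp_path to a unique ``<base>.conflicted...`` name,
    retrying with a different name if the destination already exists."""
    n = 0
    while True:
        suffix = ".conflicted" if n == 0 else ".conflicted-%d" % n
        candidate = base_path + suffix
        try:
            os.link(tmp_path, candidate)  # fails if candidate exists
        except FileExistsError:
            n += 1
            continue
        try:
            os.unlink(tmp_path)           # suppress errors, as in step 4d
        except OSError:
            pass
        return candidate
```
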
Read/download collisions
~~~~~~~~~~~~~~~~~~~~~~~~

A *read/download collision* occurs when another program reads
from ``foo`` in the local filesystem, concurrently with the new
version being written by the Magic Folder client. We want to
ensure that any successful attempt to read the file by the other
program obtains a consistent view of its contents.

On Unix, the above procedure for writing downloads is sufficient
to achieve this. There are three cases:

* A. The other process opens ``foo`` for reading before it is
  renamed to ``foo.backup``. Then the file handle will continue to
  refer to the old file across the rename, and the other process
  will read the old contents.

* B. The other process attempts to open ``foo`` after it has been
  renamed to ``foo.backup``, and before it is linked in step 4c.
  The open call fails, which is acceptable.

* C. The other process opens ``foo`` after it has been linked to
  the new file. Then it will read the new contents.

On Windows, the analysis is very similar, but case A′ needs to
be split into two subcases, depending on the sharing mode the other
process uses when opening the file for reading:

* A′. The other process opens ``foo`` before the Magic Folder
  client's attempt to rename ``foo`` to ``foo.backup`` (as part
  of the implementation of `ReplaceFileW`_). The subcases are:

  i. The other process uses sharing flags that deny deletion and
     renames. The `ReplaceFileW`_ call fails, and the download is
     reclassified as a conflict. The downloaded file ends up at
     ``foo.conflicted``, which is correct.

  ii. The other process uses sharing flags that allow deletion
      and renames. The `ReplaceFileW`_ call succeeds, and the
      other process reads inconsistent data. This can be attributed
      to a poor choice of sharing flags by the other process.

* B′. The other process attempts to open ``foo`` at the point
  during the `ReplaceFileW`_ call where it does not exist.
  The open call fails, which is acceptable.

* C′. The other process opens ``foo`` after it has been linked to
  the new file. Then it will read the new contents.


For both write/download and read/download collisions, we have
considered only interleavings with a single other process, and
only the most common possibilities for the other process'
interaction with the file. If multiple other processes are
involved, or if a process performs operations other than those
considered, then we cannot say much about the outcome in general;
however, we believe that such cases will be much less common.



Fire Dragons: Distinguishing conflicts from overwrites
''''''''''''''''''''''''''''''''''''''''''''''''''''''

When synchronizing a file that has changed remotely, the Magic Folder
client needs to distinguish between overwrites, in which the remote
side was aware of your most recent version (if any) and overwrote it
with a new version, and conflicts, in which the remote side was unaware
of your most recent version when it published its new version. Those two
cases have to be handled differently — the latter needs to be raised
to the user as an issue the user will have to resolve and the former
must not bother the user.

For example, suppose that Alice's Magic Folder client sees a change
to ``foo`` in Bob's DMD. If the version it downloads from Bob's DMD
is "based on" the version currently in Alice's local filesystem at
the time Alice's client attempts to write the downloaded file ‒ or if
there is no existing version in Alice's local filesystem at that time ‒
then it is an overwrite. Otherwise it is initially classified as a
conflict.

This initial classification is used by the procedure for writing a
file described in the `Earth Dragons`_ section above. As explained
in that section, we may reclassify an overwrite as a conflict if an
error occurs during the write procedure.

.. _`Earth Dragons`: #earth-dragons-collisions-between-local-filesystem-operations-and-downloads

In order to implement this policy, we need to specify how the
"based on" relation between file versions is recorded and updated.

We propose to record this information:

* in the :ref:`magic folder
  db<filesystem_integration-local-scanning-and-database>`, for
  local files;
* in the Tahoe-LAFS directory metadata, for files stored in the
  Magic Folder.

In the magic folder db we will add a *last-downloaded record*,
consisting of ``last_downloaded_uri`` and ``last_downloaded_timestamp``
fields, for each path stored in the database. Whenever a Magic Folder
client downloads a file, it stores the downloaded version's URI and
the current local timestamp in this record. Since only immutable
files are used, the URI will be an immutable file URI, which is
deterministically and uniquely derived from the file contents and
the Tahoe-LAFS node's :doc:`convergence secret<../../convergence-secret>`.

(Note that the last-downloaded record is updated regardless of
whether the download is an overwrite or a conflict. The rationale
for this is to avoid "conflict loops" between clients, where every
new version after the first conflict would be considered as another
conflict.)

Later, in response to a local filesystem change at a given path, the
Magic Folder client reads the last-downloaded record associated with
that path (if any) from the database and then uploads the current
file. When it links the uploaded file into its client DMD, it
includes the ``last_downloaded_uri`` field in the metadata of the
directory entry, overwriting any existing field of that name. If
there was no last-downloaded record associated with the path, this
field is omitted.

Note that the ``last_downloaded_uri`` field does *not* record the URI of
the uploaded file (which would be redundant); it records the URI of
the last download before the local change that caused the upload.
The field will be absent if the file has never been downloaded by
this client (i.e. if it was created on this client and no change
by any other client has been detected).

A possible refinement also takes into account the
``last_downloaded_timestamp`` field from the magic folder db, and
compares it to the timestamp of the change that caused the upload
(which should be later, assuming no system clock changes).
If the duration between these timestamps is very short, then we
are uncertain about whether the process on Bob's system that wrote
the local file could have taken into account the last download.
We can use this information to be conservative about treating
changes as conflicts. So, if the duration is less than a configured
threshold, we omit the ``last_downloaded_uri`` field from the
metadata. This will have the effect of making other clients treat
this change as a conflict whenever they already have a copy of the
file.

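This refinement can be sketched as a small pure function (the function
and parameter names are illustrative, not the actual API):

```python
def directory_entry_metadata(last_downloaded_uri, last_downloaded_timestamp,
                             change_timestamp, threshold):
    """Sketch: include last_downloaded_uri in the upload's metadata only
    when the local change happened at least `threshold` seconds after
    the last download; otherwise omit it, so other clients that already
    have a copy conservatively treat the change as a conflict."""
    metadata = {}
    if (last_downloaded_uri is not None
            and change_timestamp - last_downloaded_timestamp >= threshold):
        metadata["last_downloaded_uri"] = last_downloaded_uri
    return metadata
```
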
Conflict/overwrite decision algorithm
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Now we are ready to describe the algorithm for determining whether a
download for the file ``foo`` is an overwrite or a conflict (refining
step 2 of the procedure from the `Earth Dragons`_ section).

Let ``last_downloaded_uri`` be the field of that name obtained from
the directory entry metadata for ``foo`` in Bob's DMD (this field
may be absent). Then the algorithm is:

* 2a. Attempt to "stat" ``foo`` to get its *current statinfo* (size
  in bytes, ``mtime``, and ``ctime``). If Alice has no local copy
  of ``foo``, classify as an overwrite.

* 2b. Read the following information for the path ``foo`` from the
  local magic folder db:

  * the *last-seen statinfo*, if any (this is the size in
    bytes, ``mtime``, and ``ctime`` stored in the ``local_files``
    table when the file was last uploaded);
  * the ``last_uploaded_uri`` field of the ``local_files`` table
    for this file, which is the URI under which the file was last
    uploaded.

* 2c. If any of the following are true, then classify as a conflict:

  * i. there are pending notifications of changes to ``foo``;
  * ii. the last-seen statinfo is either absent (i.e. there is
    no entry in the database for this path), or different from the
    current statinfo;
  * iii. either ``last_downloaded_uri`` or ``last_uploaded_uri``
    (or both) are absent, or they are different.

Otherwise, classify as an overwrite.


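Steps 2a-2c above can be sketched as a pure function (names and the
statinfo representation are illustrative, not the actual implementation):

```python
def classify_download(current_statinfo, last_seen_statinfo,
                      last_uploaded_uri, last_downloaded_uri,
                      pending_notifications):
    """Sketch of the conflict/overwrite decision.  Statinfo values are
    (size, mtime, ctime) tuples; None means absent."""
    # 2a. No local copy of foo: overwrite.
    if current_statinfo is None:
        return "overwrite"
    # 2c.i. Pending change notifications for foo: conflict.
    if pending_notifications:
        return "conflict"
    # 2c.ii. Last-seen statinfo absent, or different from current: conflict.
    if last_seen_statinfo is None or last_seen_statinfo != current_statinfo:
        return "conflict"
    # 2c.iii. Either URI absent, or they differ: conflict.
    if (last_uploaded_uri is None or last_downloaded_uri is None
            or last_uploaded_uri != last_downloaded_uri):
        return "conflict"
    return "overwrite"
```
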
Air Dragons: Collisions between local writes and uploads
''''''''''''''''''''''''''''''''''''''''''''''''''''''''

Short of filesystem-specific features on Unix or the `shadow copy service`_
on Windows (which is per-volume and therefore difficult to use in this
context), there is no way to *read* the whole contents of a file
atomically. Therefore, when we read a file in order to upload it, we
may read an inconsistent version if it was also being written locally.

.. _`shadow copy service`: https://technet.microsoft.com/en-us/library/ee923636%28v=ws.10%29.aspx

A well-behaved application can avoid this problem for its writes:

* On Unix, if another process modifies a file by renaming a temporary
  file onto it, then we will consistently read either the old contents
  or the new contents.
* On Windows, if the other process uses sharing flags to deny reads
  while it is writing a file, then we will consistently read either
  the old contents or the new contents, unless a sharing error occurs.
  In the case of a sharing error we should retry later, up to a
  maximum number of retries.

In the case of a not-so-well-behaved application writing to a file
at the same time we read from it, the magic folder will still be
eventually consistent, but inconsistent versions may be visible to
other users' clients.

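For reference, a well-behaved Unix writer of the kind described above
(write a temporary file, then rename it onto the target) can be sketched
as follows (the function name is illustrative):

```python
import os
import tempfile

def well_behaved_write(path, data):
    """Sketch: write to a temporary file in the same directory, then
    rename it onto the target.  Concurrent readers see either the old
    contents or the new contents, never a mixture, because POSIX rename
    is atomic within a filesystem."""
    directory = os.path.dirname(os.path.abspath(path))
    fd, tmp = tempfile.mkstemp(dir=directory)
    try:
        os.write(fd, data)
        os.fsync(fd)  # flush contents before the rename makes them visible
    finally:
        os.close(fd)
    os.rename(tmp, path)
```
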
In Objective 2 we implemented a delay, called the *pending delay*,
after the notification of a filesystem change and before the file is
read in order to upload it (Tahoe-LAFS ticket `#1440`_). If another
change notification occurs within the pending delay time, the delay
is restarted. This helps to some extent because it means that if
files are written more quickly than the pending delay and less
frequently than the pending delay, we shouldn't encounter this
inconsistency.

.. _`#1440`: https://tahoe-lafs.org/trac/tahoe-lafs/ticket/1440

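The pending delay behaves like a debounce. A minimal sketch follows,
using ``threading.Timer`` for self-containedness (the actual
implementation is built on Twisted's reactor rather than threads):

```python
import threading

class PendingDelay:
    """Sketch of a restartable pending delay: each notification
    (re)starts the timer, and the upload callback fires only after the
    delay elapses with no further notifications."""
    def __init__(self, delay_seconds, callback):
        self.delay = delay_seconds
        self.callback = callback
        self._timer = None
        self._lock = threading.Lock()

    def notify(self):
        # A new filesystem notification restarts the delay.
        with self._lock:
            if self._timer is not None:
                self._timer.cancel()
            self._timer = threading.Timer(self.delay, self.callback)
            self._timer.start()
```
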
The likelihood of inconsistency could be further reduced, even for
writes by not-so-well-behaved applications, by delaying the actual
upload for a further period — called the *stability delay* — after the
file has finished being read. If a notification occurs between the
end of the pending delay and the end of the stability delay, then
the read would be aborted and the notification requeued.

This would have the effect of ensuring that no write notifications
have been received for the file during a time window that brackets
the period when it was being read, with margin before and after
this period defined by the pending and stability delays. The delays
are intended to account for asynchronous notification of events, and
caching in the filesystem.

Note however that we cannot guarantee that the delays will be long
enough to prevent inconsistency in any particular case. Also, the
stability delay would potentially affect performance significantly
because (unlike the pending delay) it is not overlapped when there
are multiple files on the upload queue. This performance impact
could be mitigated by uploading files in parallel where possible
(Tahoe-LAFS ticket `#1459`_).

We have not yet decided whether to implement the stability delay, and
it is not planned to be implemented for the OTF objective 4 milestone.
Ticket `#2431`_ has been opened to track this idea.

.. _`#1459`: https://tahoe-lafs.org/trac/tahoe-lafs/ticket/1459
.. _`#2431`: https://tahoe-lafs.org/trac/tahoe-lafs/ticket/2431

Note that the situation of both a local process and the Magic Folder
client reading a file at the same time cannot cause any inconsistency.


Water Dragons: Handling deletion and renames
''''''''''''''''''''''''''''''''''''''''''''

Deletion of a file
~~~~~~~~~~~~~~~~~~

When a file is deleted from the filesystem of a Magic Folder client,
the most intuitive behavior is for it also to be deleted under that
name from other clients. To avoid data loss, the other clients should
actually rename their copies to a backup filename.

It would not be sufficient for a Magic Folder client that deletes
a file to implement this simply by removing the directory entry from
its DMD. Indeed, the entry may not exist in the client's DMD if it
has never previously changed the file.

Instead, the client links a zero-length file into its DMD and sets
``deleted: true`` in the directory entry metadata. Other clients
take this as a signal to rename their copies to the backup filename.

Note that the entry for this zero-length file has a version number as
usual, and later versions may restore the file.

When the downloader deletes a file (or renames it to a filename
ending in ``.backup``) in response to a remote change, a local
filesystem notification will occur, and we must make sure that this
is not treated as a local change. To do this we have the downloader
set the ``size`` field in the magic folder db to ``None`` (SQL NULL)
just before deleting the file, and suppress notifications for which
the local file does not exist and the recorded ``size`` field is
``None``.

When a Magic Folder client restarts, we can detect files that had
been downloaded but were deleted while it was not running, because
their paths will have last-downloaded records in the magic folder db
with a ``size`` other than ``None``, and without any corresponding
local file.

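The suppression rule above amounts to a simple predicate, sketched here
with illustrative names:

```python
import os

def suppress_deletion_notification(path, recorded_size):
    """Sketch: a filesystem notification for `path` is suppressed when
    the local file no longer exists and the magic folder db recorded a
    ``size`` of None (SQL NULL), meaning the downloader itself deleted
    the file."""
    return (not os.path.exists(path)) and recorded_size is None
```
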
Deletion of a directory
~~~~~~~~~~~~~~~~~~~~~~~

Local filesystems (unlike a Tahoe-LAFS filesystem) normally cannot
unlink a directory that has any remaining children. Therefore a
Magic Folder client cannot delete local copies of directories in
general, because they will typically contain backup files. This must
be done manually on each client if desired.

Nevertheless, a Magic Folder client that deletes a directory should
set ``deleted: true`` on the metadata entry for the corresponding
zero-length file. This avoids the directory being recreated after
it has been manually deleted from a client.

Renaming
|
||||
~~~~~~~~
|
||||
|
||||
It is sufficient to handle renaming of a file by treating it as a
|
||||
deletion and an addition under the new name.
|
||||
|
||||
This also applies to directories, although users may find the
|
||||
resulting behavior unintuitive: all of the files under the old name
|
||||
will be renamed to backup filenames, and a new directory structure
|
||||
created under the new name. We believe this is the best that can be
|
||||
done without imposing unreasonable implementation complexity.
|
||||
|
||||
|
||||
Summary
|
||||
-------
|
||||
|
||||
This completes the design of remote-to-local synchronization.
|
||||
We realize that it may seem very complicated. Anecdotally, proprietary
|
||||
filesystem synchronization designs we are aware of, such as Dropbox,
|
||||
are said to incur similar or greater design complexity.
|
@ -1,205 +0,0 @@
|
||||
Magic Folder user interface design
==================================

Scope
-----

In this Objective we will design a user interface to allow users to conveniently
and securely indicate which folders on some devices should be "magically" linked
to which folders on other devices.

This is a critical usability and security issue for which there is no known perfect
solution, but which we believe is amenable to a "good enough" trade-off solution.
This document explains the design and justifies its trade-offs in terms of security,
usability, and time-to-market.

Tickets on the Tahoe-LAFS trac with the `otf-magic-folder-objective6`_
keyword are within the scope of the user interface design.

.. _otf-magic-folder-objective6: https://tahoe-lafs.org/trac/tahoe-lafs/query?status=!closed&keywords=~otf-magic-folder-objective6

Glossary
''''''''

Object: a file or directory

DMD: distributed mutable directory

Folder: an abstract directory that is synchronized between clients.
(A folder is not the same as the directory corresponding to it on
any particular client, nor is it the same as a DMD.)

Collective: the set of clients subscribed to a given Magic Folder.

Diminishing: the process of deriving, from an existing capability,
another capability that gives less authority (for example, deriving a
read cap from a read/write cap).


Design Constraints
------------------

The design of the Tahoe-side representation of a Magic Folder, and the
polling mechanism that the Magic Folder clients will use to detect remote
changes, was discussed in :doc:`remote-to-local-sync<remote-to-local-sync>`,
and we will not revisit it here. The assumption made by that design was
that each client would be configured with the following information:

* a write cap to its own *client DMD*.
* a read cap to a *collective directory*.

The collective directory contains links to each client DMD, named by the
corresponding client's nickname.

This design was chosen to allow straightforward addition of clients without
requiring each existing client to change its configuration.

Note that each client in a Magic Folder collective has the authority to add,
modify or delete any object within the Magic Folder. It is also able to control
to some extent whether its writes will be treated by another client as overwrites
or as conflicts. However, there is still a reliability benefit to preventing a
client from accidentally modifying another client's DMD, or from accidentally
modifying the collective directory in a way that would lose data. This motivates
ensuring that each client only has access to the caps above, rather than, say,
every client having a write cap to the collective directory.

Another important design constraint is that we cannot violate the :doc:`write
coordination directive<../../write_coordination>`; that is, we cannot write to
the same mutable directory from multiple clients, even during the setup phase
when adding a client.

Within these constraints, for usability we want to minimize the number of steps
required to configure a Magic Folder collective.


Proposed Design
---------------

Three ``tahoe`` subcommands are added::

  tahoe magic-folder create MAGIC: [MY_NICKNAME LOCAL_DIR]

    Create an empty Magic Folder. The MAGIC: local alias is set
    to a write cap which can be used to refer to this Magic Folder
    in future ``tahoe magic-folder invite`` commands.

    If MY_NICKNAME and LOCAL_DIR are given, the current client
    immediately joins the newly created Magic Folder with that
    nickname and local directory.


  tahoe magic-folder invite MAGIC: THEIR_NICKNAME

    Print an "invitation" that can be used to invite another
    client to join a Magic Folder, with the given nickname.

    The invitation must be sent to the user of the other client
    over a secure channel (e.g. PGP email, OTR, or ssh).

    This command will normally be run by the same client that
    created the Magic Folder. However, it may be run by a
    different client if the ``MAGIC:`` alias is copied to
    the ``private/aliases`` file of that other client, or if
    ``MAGIC:`` is replaced by the write cap to which it points.


  tahoe magic-folder join INVITATION LOCAL_DIR

    Accept an invitation created by ``tahoe magic-folder invite``.
    The current client joins the specified Magic Folder, which will
    appear in the local filesystem at the given directory.


There are no commands to remove a client or to revoke an
invitation, although those are possible features that could
be added in future. (When removing a client, it is necessary
to copy each file it added to some other client's DMD, if it
is the most recent version of that file.)


Implementation
''''''''''''''

For "``tahoe magic-folder create MAGIC: [MY_NICKNAME LOCAL_DIR]``":

1. Run "``tahoe create-alias MAGIC:``".
2. If ``MY_NICKNAME`` and ``LOCAL_DIR`` are given, do the equivalent of::

     INVITATION=`tahoe magic-folder invite MAGIC: MY_NICKNAME`
     tahoe magic-folder join INVITATION LOCAL_DIR


For "``tahoe magic-folder invite COLLECTIVE_WRITECAP NICKNAME``":

(``COLLECTIVE_WRITECAP`` can, as a special case, be an alias such as ``MAGIC:``.)

1. Create an empty client DMD. Let its write URI be ``CLIENT_WRITECAP``.
2. Diminish ``CLIENT_WRITECAP`` to ``CLIENT_READCAP``, and
   diminish ``COLLECTIVE_WRITECAP`` to ``COLLECTIVE_READCAP``.
3. Run "``tahoe ln CLIENT_READCAP COLLECTIVE_WRITECAP/NICKNAME``".
4. Print "``COLLECTIVE_READCAP+CLIENT_WRITECAP``" as the invitation,
   accompanied by instructions on how to accept the invitation and
   the need to send it over a secure channel.
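
The invite steps above can be sketched as a small function. The Tahoe operations involved (DMD creation, diminishing, ``tahoe ln``) are passed in as hypothetical callables, since their real APIs are not shown here:

```python
def invite(collective_writecap, nickname, create_client_dmd, diminish, tahoe_ln):
    # 1. Create an empty client DMD for the invitee.
    client_writecap = create_client_dmd()
    # 2. Diminish both write caps to read caps.
    client_readcap = diminish(client_writecap)
    collective_readcap = diminish(collective_writecap)
    # 3. Link the client's *read* cap into the collective under the
    #    nickname, so other members can only read this client's DMD.
    tahoe_ln(client_readcap, "{}/{}".format(collective_writecap, nickname))
    # 4. The invitation bundles exactly what the invitee needs: read
    #    access to the collective plus write access to its own DMD.
    return "{}+{}".format(collective_readcap, client_writecap)
```

Note that only the invitee's read cap enters the collective, which is what keeps other members from writing to that client's DMD.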

For "``tahoe magic-folder join INVITATION LOCAL_DIR``":

1. Parse ``INVITATION`` as ``COLLECTIVE_READCAP+CLIENT_WRITECAP``.
2. Write ``CLIENT_WRITECAP`` to the file ``magic_folder_dircap``
   under the client's ``private`` directory.
3. Write ``COLLECTIVE_READCAP`` to the file ``collective_dircap``
   under the client's ``private`` directory.
4. Edit the client's ``tahoe.cfg`` to set
   ``[magic_folder] enabled = True`` and
   ``[magic_folder] local.directory = LOCAL_DIR``.
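
The join steps amount to splitting the invitation string on ``+`` and writing two small files plus a config edit. A rough sketch (``join_magic_folder`` is a hypothetical helper; the real client edits ``tahoe.cfg`` in place rather than appending to it):

```python
import os

def join_magic_folder(invitation, node_dir, local_dir):
    # The invitation is the two caps printed by "invite",
    # joined with "+": COLLECTIVE_READCAP+CLIENT_WRITECAP.
    collective_readcap, client_writecap = invitation.strip().split("+", 1)

    private = os.path.join(node_dir, "private")
    with open(os.path.join(private, "magic_folder_dircap"), "w") as f:
        f.write(client_writecap)
    with open(os.path.join(private, "collective_dircap"), "w") as f:
        f.write(collective_readcap)

    # Appending a section is enough to illustrate the configuration
    # involved (a real implementation would merge into tahoe.cfg).
    with open(os.path.join(node_dir, "tahoe.cfg"), "a") as f:
        f.write("\n[magic_folder]\nenabled = True\n"
                "local.directory = {}\n".format(local_dir))
```

Splitting on the first ``+`` is safe here because Tahoe-LAFS cap URIs do not contain ``+``.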

Discussion
----------

The proposed design has a minor violation of the
`Principle of Least Authority`_ in order to reduce the number
of steps needed. The invoker of "``tahoe magic-folder invite``"
creates the client DMD on behalf of the invited client, and
could retain its write cap (which is part of the invitation).

.. _`Principle of Least Authority`: http://www.eros-os.org/papers/secnotsep.pdf

A possible alternative design would be for the invited client
to create its own client DMD, and send it back to the inviter
to be linked into the collective directory. However this would
require another secure communication and another command
invocation per client. Given that, as mentioned earlier, each
client in a Magic Folder collective already has the authority
to add, modify or delete any object within the Magic Folder,
we considered the potential security/reliability improvement
here not to be worth the loss of usability.

We also considered a design where each client had write access to
the collective directory. This would arguably be a more serious
violation of the Principle of Least Authority than the one above
(because all clients would have excess authority rather than just
the inviter). In any case, it was not clear how to make such a
design satisfy the :doc:`write coordination
directive<../../write_coordination>`, because the collective
directory would have needed to be written to by multiple clients.

The reliance on a secure channel to send the invitation to its
intended recipient is not ideal, since it may involve additional
software such as clients for PGP, OTR, ssh etc. However, we believe
that this complexity is necessary rather than incidental, because
there must be some way to distinguish the intended recipient from
potential attackers who would try to become members of the Magic
Folder collective without authorization. By making use of existing
channels that have likely already been set up by security-conscious
users, we avoid reinventing the wheel or imposing substantial extra
implementation costs.

The length of an invitation will be approximately the combined
length of a Tahoe-LAFS read cap and write cap. This is several
lines long, but still short enough to be cut-and-pasted successfully
if care is taken. Errors in copying the invitation can be detected
since Tahoe-LAFS cap URIs are self-authenticating.

The implementation of the ``tahoe`` subcommands is straightforward
and raises no further difficult design issues.
@ -332,11 +332,6 @@ def storage_nodes(reactor, temp_dir, introducer, introducer_furl, flog_gatherer,
@pytest.fixture(scope='session')
@log_call(action_type=u"integration:alice", include_args=[], include_result=False)
def alice(reactor, temp_dir, introducer_furl, flog_gatherer, storage_nodes, request):
    try:
        mkdir(join(temp_dir, 'magic-alice'))
    except OSError:
        pass

    process = pytest_twisted.blockon(
        _create_node(
            reactor, request, temp_dir, introducer_furl, flog_gatherer, "alice",
@ -351,11 +346,6 @@ def alice(reactor, temp_dir, introducer_furl, flog_gatherer, storage_nodes, requ
@pytest.fixture(scope='session')
@log_call(action_type=u"integration:bob", include_args=[], include_result=False)
def bob(reactor, temp_dir, introducer_furl, flog_gatherer, storage_nodes, request):
    try:
        mkdir(join(temp_dir, 'magic-bob'))
    except OSError:
        pass

    process = pytest_twisted.blockon(
        _create_node(
            reactor, request, temp_dir, introducer_furl, flog_gatherer, "bob",
@ -367,97 +357,6 @@ def bob(reactor, temp_dir, introducer_furl, flog_gatherer, storage_nodes, reques
    return process


@pytest.fixture(scope='session')
@log_call(action_type=u"integration:alice:invite", include_args=["temp_dir"])
def alice_invite(reactor, alice, temp_dir, request):
    node_dir = join(temp_dir, 'alice')

    with start_action(action_type=u"integration:alice:magic_folder:create"):
        # FIXME XXX by the time we see "client running" in the logs, the
        # storage servers aren't "really" ready to roll yet (uploads fairly
        # consistently fail if we don't hack in this pause...)
        proto = _CollectOutputProtocol()
        _tahoe_runner_optional_coverage(
            proto,
            reactor,
            request,
            [
                'magic-folder', 'create',
                '--poll-interval', '2',
                '--basedir', node_dir, 'magik:', 'alice',
                join(temp_dir, 'magic-alice'),
            ]
        )
        pytest_twisted.blockon(proto.done)

    with start_action(action_type=u"integration:alice:magic_folder:invite") as a:
        proto = _CollectOutputProtocol()
        _tahoe_runner_optional_coverage(
            proto,
            reactor,
            request,
            [
                'magic-folder', 'invite',
                '--basedir', node_dir, 'magik:', 'bob',
            ]
        )
        pytest_twisted.blockon(proto.done)
        invite = proto.output.getvalue()
        a.add_success_fields(invite=invite)

    with start_action(action_type=u"integration:alice:magic_folder:restart"):
        # before magic-folder works, we have to stop and restart (this is
        # crappy for the tests -- can we fix it in magic-folder?)
        try:
            alice.transport.signalProcess('TERM')
            pytest_twisted.blockon(alice.transport.exited)
        except ProcessExitedAlready:
            pass
        with start_action(action_type=u"integration:alice:magic_folder:magic-text"):
            magic_text = 'Completed initial Magic Folder scan successfully'
            pytest_twisted.blockon(_run_node(reactor, node_dir, request, magic_text))
    await_client_ready(alice)
    return invite


@pytest.fixture(scope='session')
@log_call(
    action_type=u"integration:magic_folder",
    include_args=["alice_invite", "temp_dir"],
)
def magic_folder(reactor, alice_invite, alice, bob, temp_dir, request):
    print("pairing magic-folder")
    bob_dir = join(temp_dir, 'bob')
    proto = _CollectOutputProtocol()
    _tahoe_runner_optional_coverage(
        proto,
        reactor,
        request,
        [
            'magic-folder', 'join',
            '--poll-interval', '1',
            '--basedir', bob_dir,
            alice_invite,
            join(temp_dir, 'magic-bob'),
        ]
    )
    pytest_twisted.blockon(proto.done)

    # before magic-folder works, we have to stop and restart (this is
    # crappy for the tests -- can we fix it in magic-folder?)
    try:
        print("Sending TERM to Bob")
        bob.transport.signalProcess('TERM')
        pytest_twisted.blockon(bob.transport.exited)
    except ProcessExitedAlready:
        pass

    magic_text = 'Completed initial Magic Folder scan successfully'
    pytest_twisted.blockon(_run_node(reactor, bob_dir, request, magic_text))
    await_client_ready(bob)
    return (join(temp_dir, 'magic-alice'), join(temp_dir, 'magic-bob'))


@pytest.fixture(scope='session')
def chutney(reactor, temp_dir):
    chutney_dir = join(temp_dir, 'chutney')

@ -16,7 +16,3 @@ def test_create_introducer(introducer):

def test_create_storage(storage_nodes):
    print("Created {} storage nodes".format(len(storage_nodes)))


def test_create_alice_bob_magicfolder(magic_folder):
    print("Alice and Bob have paired magic-folders")

@ -1,462 +0,0 @@
|
||||
import sys
|
||||
import time
|
||||
import shutil
|
||||
from os import mkdir, unlink, utime
|
||||
from os.path import join, exists, getmtime
|
||||
|
||||
import util
|
||||
|
||||
import pytest_twisted
|
||||
|
||||
|
||||
# tests converted from check_magicfolder_smoke.py
|
||||
# see "conftest.py" for the fixtures (e.g. "magic_folder")
|
||||
|
||||
def test_eliot_logs_are_written(alice, bob, temp_dir):
|
||||
# The integration test configuration arranges for this logging
|
||||
# configuration. Verify it actually does what we want.
|
||||
#
|
||||
# The alice and bob arguments looks unused but they actually tell pytest
|
||||
# to set up all the magic-folder stuff. The assertions here are about
|
||||
# side-effects of that setup.
|
||||
assert exists(join(temp_dir, "alice", "logs", "eliot.json"))
|
||||
assert exists(join(temp_dir, "bob", "logs", "eliot.json"))
|
||||
|
||||
|
||||
def test_alice_writes_bob_receives(magic_folder):
|
||||
alice_dir, bob_dir = magic_folder
|
||||
|
||||
with open(join(alice_dir, "first_file"), "w") as f:
|
||||
f.write("alice wrote this")
|
||||
|
||||
util.await_file_contents(join(bob_dir, "first_file"), "alice wrote this")
|
||||
return
|
||||
|
||||
|
||||
def test_alice_writes_bob_receives_multiple(magic_folder):
|
||||
"""
|
||||
When Alice does a series of updates, Bob should just receive them
|
||||
with no .backup or .conflict files being produced.
|
||||
"""
|
||||
alice_dir, bob_dir = magic_folder
|
||||
|
||||
unwanted_files = [
|
||||
join(bob_dir, "multiple.backup"),
|
||||
join(bob_dir, "multiple.conflict")
|
||||
]
|
||||
|
||||
# first update
|
||||
with open(join(alice_dir, "multiple"), "w") as f:
|
||||
f.write("alice wrote this")
|
||||
|
||||
util.await_file_contents(
|
||||
join(bob_dir, "multiple"), "alice wrote this",
|
||||
error_if=unwanted_files,
|
||||
)
|
||||
|
||||
# second update
|
||||
with open(join(alice_dir, "multiple"), "w") as f:
|
||||
f.write("someone changed their mind")
|
||||
|
||||
util.await_file_contents(
|
||||
join(bob_dir, "multiple"), "someone changed their mind",
|
||||
error_if=unwanted_files,
|
||||
)
|
||||
|
||||
# third update
|
||||
with open(join(alice_dir, "multiple"), "w") as f:
|
||||
f.write("absolutely final version ship it")
|
||||
|
||||
util.await_file_contents(
|
||||
join(bob_dir, "multiple"), "absolutely final version ship it",
|
||||
error_if=unwanted_files,
|
||||
)
|
||||
|
||||
# forth update, but both "at once" so one should conflict
|
||||
time.sleep(2)
|
||||
with open(join(alice_dir, "multiple"), "w") as f:
|
||||
f.write("okay one more attempt")
|
||||
with open(join(bob_dir, "multiple"), "w") as f:
|
||||
f.write("...but just let me add")
|
||||
|
||||
bob_conflict = join(bob_dir, "multiple.conflict")
|
||||
alice_conflict = join(alice_dir, "multiple.conflict")
|
||||
|
||||
found = util.await_files_exist([
|
||||
bob_conflict,
|
||||
alice_conflict,
|
||||
])
|
||||
|
||||
assert len(found) > 0, "Should have found a conflict"
|
||||
print("conflict found (as expected)")
|
||||
|
||||
|
||||
def test_alice_writes_bob_receives_old_timestamp(magic_folder):
|
||||
alice_dir, bob_dir = magic_folder
|
||||
fname = join(alice_dir, "ts_file")
|
||||
ts = time.time() - (60 * 60 * 36) # 36 hours ago
|
||||
|
||||
with open(fname, "w") as f:
|
||||
f.write("alice wrote this")
|
||||
utime(fname, (time.time(), ts))
|
||||
|
||||
fname = join(bob_dir, "ts_file")
|
||||
util.await_file_contents(fname, "alice wrote this")
|
||||
# make sure the timestamp is correct
|
||||
assert int(getmtime(fname)) == int(ts)
|
||||
return
|
||||
|
||||
|
||||
def test_bob_writes_alice_receives(magic_folder):
|
||||
alice_dir, bob_dir = magic_folder
|
||||
|
||||
with open(join(bob_dir, "second_file"), "w") as f:
|
||||
f.write("bob wrote this")
|
||||
|
||||
util.await_file_contents(join(alice_dir, "second_file"), "bob wrote this")
|
||||
return
|
||||
|
||||
|
||||
def test_alice_deletes(magic_folder):
|
||||
# alice writes a file, waits for bob to get it and then deletes it.
|
||||
alice_dir, bob_dir = magic_folder
|
||||
|
||||
with open(join(alice_dir, "delfile"), "w") as f:
|
||||
f.write("alice wrote this")
|
||||
|
||||
util.await_file_contents(join(bob_dir, "delfile"), "alice wrote this")
|
||||
|
||||
# bob has the file; now alices deletes it
|
||||
unlink(join(alice_dir, "delfile"))
|
||||
|
||||
# bob should remove his copy, but preserve a backup
|
||||
util.await_file_vanishes(join(bob_dir, "delfile"))
|
||||
util.await_file_contents(join(bob_dir, "delfile.backup"), "alice wrote this")
|
||||
return
|
||||
|
||||
|
||||
def test_alice_creates_bob_edits(magic_folder):
|
||||
alice_dir, bob_dir = magic_folder
|
||||
|
||||
# alice writes a file
|
||||
with open(join(alice_dir, "editfile"), "w") as f:
|
||||
f.write("alice wrote this")
|
||||
|
||||
util.await_file_contents(join(bob_dir, "editfile"), "alice wrote this")
|
||||
|
||||
# now bob edits it
|
||||
with open(join(bob_dir, "editfile"), "w") as f:
|
||||
f.write("bob says foo")
|
||||
|
||||
util.await_file_contents(join(alice_dir, "editfile"), "bob says foo")
|
||||
|
||||
|
||||
def test_bob_creates_sub_directory(magic_folder):
|
||||
alice_dir, bob_dir = magic_folder
|
||||
|
||||
# bob makes a sub-dir, with a file in it
|
||||
mkdir(join(bob_dir, "subdir"))
|
||||
with open(join(bob_dir, "subdir", "a_file"), "w") as f:
|
||||
f.write("bob wuz here")
|
||||
|
||||
# alice gets it
|
||||
util.await_file_contents(join(alice_dir, "subdir", "a_file"), "bob wuz here")
|
||||
|
||||
# now bob deletes it again
|
||||
shutil.rmtree(join(bob_dir, "subdir"))
|
||||
|
||||
# alice should delete it as well
|
||||
util.await_file_vanishes(join(alice_dir, "subdir", "a_file"))
|
||||
# i *think* it's by design that the subdir won't disappear,
|
||||
# because a "a_file.backup" should appear...
|
||||
util.await_file_contents(join(alice_dir, "subdir", "a_file.backup"), "bob wuz here")
|
||||
|
||||
|
||||
def test_bob_creates_alice_deletes_bob_restores(magic_folder):
|
||||
alice_dir, bob_dir = magic_folder
|
||||
|
||||
# bob creates a file
|
||||
with open(join(bob_dir, "boom"), "w") as f:
|
||||
f.write("bob wrote this")
|
||||
|
||||
util.await_file_contents(
|
||||
join(alice_dir, "boom"),
|
||||
"bob wrote this"
|
||||
)
|
||||
|
||||
# alice deletes it (so bob should as well .. but keep a backup)
|
||||
unlink(join(alice_dir, "boom"))
|
||||
util.await_file_vanishes(join(bob_dir, "boom"))
|
||||
assert exists(join(bob_dir, "boom.backup"))
|
||||
|
||||
# bob restore it, with new contents
|
||||
unlink(join(bob_dir, "boom.backup"))
|
||||
with open(join(bob_dir, "boom"), "w") as f:
|
||||
f.write("bob wrote this again, because reasons")
|
||||
|
||||
# XXX double-check this behavior is correct!
|
||||
|
||||
# alice sees bob's update, but marks it as a conflict (because
|
||||
# .. she previously deleted it? does that really make sense)
|
||||
|
||||
util.await_file_contents(
|
||||
join(alice_dir, "boom"),
|
||||
"bob wrote this again, because reasons",
|
||||
)
|
||||
|
||||
|
||||
def test_bob_creates_alice_deletes_alice_restores(magic_folder):
|
||||
alice_dir, bob_dir = magic_folder
|
||||
|
||||
# bob creates a file
|
||||
with open(join(bob_dir, "boom2"), "w") as f:
|
||||
f.write("bob wrote this")
|
||||
|
||||
util.await_file_contents(
|
||||
join(alice_dir, "boom2"),
|
||||
"bob wrote this"
|
||||
)
|
||||
|
||||
# alice deletes it (so bob should as well)
|
||||
unlink(join(alice_dir, "boom2"))
|
||||
util.await_file_vanishes(join(bob_dir, "boom2"))
|
||||
|
||||
# alice restore it, with new contents
|
||||
with open(join(alice_dir, "boom2"), "w") as f:
|
||||
f.write("alice re-wrote this again, because reasons")
|
||||
|
||||
util.await_file_contents(
|
||||
join(bob_dir, "boom2"),
|
||||
"alice re-wrote this again, because reasons"
|
||||
)
|
||||
|
||||
|
||||
def test_bob_conflicts_with_alice_fresh(magic_folder):
|
||||
# both alice and bob make a file at "the same time".
|
||||
alice_dir, bob_dir = magic_folder
|
||||
|
||||
# either alice or bob will "win" by uploading to the DMD first.
|
||||
with open(join(bob_dir, 'alpha'), 'w') as f0, open(join(alice_dir, 'alpha'), 'w') as f1:
|
||||
f0.write("this is bob's alpha\n")
|
||||
f1.write("this is alice's alpha\n")
|
||||
|
||||
# there should be conflicts
|
||||
_bob_conflicts_alice_await_conflicts('alpha', alice_dir, bob_dir)
|
||||
|
||||
|
||||
def test_bob_conflicts_with_alice_preexisting(magic_folder):
|
||||
# both alice and bob edit a file at "the same time" (similar to
|
||||
# above, but the file already exists before the edits)
|
||||
alice_dir, bob_dir = magic_folder
|
||||
|
||||
# have bob create the file
|
||||
with open(join(bob_dir, 'beta'), 'w') as f:
|
||||
f.write("original beta (from bob)\n")
|
||||
util.await_file_contents(join(alice_dir, 'beta'), "original beta (from bob)\n")
|
||||
|
||||
# both alice and bob now have a "beta" file, at version 0
|
||||
|
||||
# either alice or bob will "win" by uploading to the DMD first
|
||||
# (however, they should both detect a conflict)
|
||||
with open(join(bob_dir, 'beta'), 'w') as f:
|
||||
f.write("this is bob's beta\n")
|
||||
with open(join(alice_dir, 'beta'), 'w') as f:
|
||||
f.write("this is alice's beta\n")
|
||||
|
||||
# both alice and bob should see a conflict
|
||||
_bob_conflicts_alice_await_conflicts("beta", alice_dir, bob_dir)
|
||||
|
||||
|
||||
def _bob_conflicts_alice_await_conflicts(name, alice_dir, bob_dir):
|
||||
"""
|
||||
shared code between _fresh and _preexisting conflict test
|
||||
"""
|
||||
found = util.await_files_exist(
|
||||
[
|
||||
join(bob_dir, '{}.conflict'.format(name)),
|
||||
join(alice_dir, '{}.conflict'.format(name)),
|
||||
],
|
||||
)
|
||||
|
||||
assert len(found) >= 1, "should be at least one conflict"
|
||||
assert open(join(bob_dir, name), 'r').read() == "this is bob's {}\n".format(name)
|
||||
assert open(join(alice_dir, name), 'r').read() == "this is alice's {}\n".format(name)
|
||||
|
||||
alice_conflict = join(alice_dir, '{}.conflict'.format(name))
|
||||
bob_conflict = join(bob_dir, '{}.conflict'.format(name))
|
||||
if exists(bob_conflict):
|
||||
assert open(bob_conflict, 'r').read() == "this is alice's {}\n".format(name)
|
||||
if exists(alice_conflict):
|
||||
assert open(alice_conflict, 'r').read() == "this is bob's {}\n".format(name)
|
||||
|
||||
|
||||
@pytest_twisted.inlineCallbacks
|
||||
def test_edmond_uploads_then_restarts(reactor, request, temp_dir, introducer_furl, flog_gatherer, storage_nodes):
|
||||
"""
|
||||
ticket 2880: if a magic-folder client uploads something, then
|
||||
re-starts a spurious .backup file should not appear
|
||||
"""
|
||||
|
||||
edmond_dir = join(temp_dir, 'edmond')
|
||||
edmond = yield util._create_node(
|
||||
reactor, request, temp_dir, introducer_furl, flog_gatherer,
|
||||
"edmond", web_port="tcp:9985:interface=localhost",
|
||||
storage=False,
|
||||
)
|
||||
|
||||
|
||||
magic_folder = join(temp_dir, 'magic-edmond')
|
||||
mkdir(magic_folder)
|
||||
created = False
|
||||
# create a magic-folder
|
||||
# (how can we know that the grid is ready?)
|
||||
for _ in range(10): # try 10 times
|
||||
try:
|
||||
proto = util._CollectOutputProtocol()
|
||||
transport = reactor.spawnProcess(
|
||||
proto,
|
||||
sys.executable,
|
||||
[
|
||||
sys.executable, '-m', 'allmydata.scripts.runner',
|
||||
'magic-folder', 'create',
|
||||
'--poll-interval', '2',
|
||||
'--basedir', edmond_dir,
|
||||
'magik:',
|
||||
'edmond_magic',
|
||||
magic_folder,
|
||||
]
|
||||
)
|
||||
yield proto.done
|
||||
created = True
|
||||
break
|
||||
except Exception as e:
|
||||
print("failed to create magic-folder: {}".format(e))
|
||||
time.sleep(1)
|
||||
|
||||
assert created, "Didn't create a magic-folder"
|
||||
|
||||
# to actually-start the magic-folder we have to re-start
|
||||
edmond.transport.signalProcess('TERM')
|
||||
yield edmond.transport.exited
|
||||
edmond = yield util._run_node(reactor, edmond.node_dir, request, 'Completed initial Magic Folder scan successfully')
|
||||
util.await_client_ready(edmond)
|
||||
|
||||
# add a thing to the magic-folder
|
||||
with open(join(magic_folder, "its_a_file"), "w") as f:
|
||||
f.write("edmond wrote this")
|
||||
|
||||
# fixme, do status-update attempts in a loop below
|
||||
time.sleep(5)
|
||||
|
||||
# let it upload; poll the HTTP magic-folder status API until it is
|
||||
# uploaded
|
||||
from allmydata.scripts.magic_folder_cli import _get_json_for_fragment
|
||||
|
||||
with open(join(edmond_dir, u'private', u'api_auth_token'), 'rb') as f:
|
||||
token = f.read()
|
||||
|
||||
uploaded = False
|
||||
for _ in range(10):
|
||||
options = {
|
||||
"node-url": open(join(edmond_dir, u'node.url'), 'r').read().strip(),
|
||||
}
|
||||
try:
|
||||
magic_data = _get_json_for_fragment(
|
||||
options,
|
||||
'magic_folder?t=json',
|
||||
method='POST',
|
||||
post_args=dict(
|
||||
t='json',
|
||||
name='default',
|
||||
token=token,
|
||||
)
|
||||
)
|
||||
for mf in magic_data:
|
||||
if mf['status'] == u'success' and mf['path'] == u'its_a_file':
|
||||
uploaded = True
|
||||
break
|
||||
except Exception as e:
|
||||
time.sleep(1)
|
||||
|
||||
assert uploaded, "expected to upload 'its_a_file'"
|
||||
|
||||
# re-starting edmond right now would "normally" trigger the 2880 bug
|
||||
|
||||
# kill edmond
|
||||
edmond.transport.signalProcess('TERM')
|
||||
yield edmond.transport.exited
|
||||
time.sleep(1)
|
||||
edmond = yield util._run_node(reactor, edmond.node_dir, request, 'Completed initial Magic Folder scan successfully')
|
||||
util.await_client_ready(edmond)
|
||||
|
||||
# XXX how can we say for sure if we've waited long enough? look at
|
||||
# tail of logs for magic-folder ... somethingsomething?
|
||||
print("waiting 20 seconds to see if a .backup appears")
|
||||
for _ in range(20):
|
||||
assert exists(join(magic_folder, "its_a_file"))
|
||||
assert not exists(join(magic_folder, "its_a_file.backup"))
|
||||
time.sleep(1)
|
||||
|
||||
|
||||
@pytest_twisted.inlineCallbacks
|
||||
def test_alice_adds_files_while_bob_is_offline(reactor, request, temp_dir, magic_folder):
    """
    Alice can add new files to a magic folder while Bob is offline. When Bob
    comes back online his copy is updated to reflect the new files.
    """
    alice_magic_dir, bob_magic_dir = magic_folder
    alice_node_dir = join(temp_dir, "alice")
    bob_node_dir = join(temp_dir, "bob")

    # Take Bob offline.
    yield util.cli(request, reactor, bob_node_dir, "stop")

    # Create a couple files in Alice's local directory.
    some_files = list(
        (name * 3) + ".added-while-offline"
        for name
        in "xyz"
    )
    for name in some_files:
        with open(join(alice_magic_dir, name), "w") as f:
            f.write(name + " some content")

    good = False
    for i in range(15):
        status = yield util.magic_folder_cli(request, reactor, alice_node_dir, "status")
        good = status.count(".added-while-offline (36 B): good, version=0") == len(some_files) * 2
        if good:
            # We saw each file as having a local good state and a remote good
            # state. That means we're ready to involve Bob.
            break
        else:
            time.sleep(1.0)

    assert good, (
        "Timed out waiting for good Alice state. Last status:\n{}".format(status)
    )

    # Start Bob up again
    magic_text = 'Completed initial Magic Folder scan successfully'
    yield util._run_node(reactor, bob_node_dir, request, magic_text)

    yield util.await_files_exist(
        list(
            join(bob_magic_dir, name)
            for name
            in some_files
        ),
        await_all=True,
    )
    # Let it settle. It would be nicer to have a readable status output we
    # could query. Parsing the current text format is more than I want to
    # deal with right now.
    time.sleep(1.0)
    conflict_files = list(name + ".conflict" for name in some_files)
    assert all(
        list(
            not exists(join(bob_magic_dir, name))
            for name
            in conflict_files
        ),
    )
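The status-polling loop in the test above is the usual poll-with-timeout pattern: try a check repeatedly, sleeping between attempts, and give up after a bounded number of tries. A minimal standalone sketch (the `poll_until` helper is hypothetical, not part of the test utilities):

```python
import time

# Hypothetical helper mirroring the poll loop above: call check()
# up to `tries` times, sleeping `delay` seconds between failed
# attempts, and return True as soon as it succeeds.
def poll_until(check, tries=15, delay=1.0):
    for _ in range(tries):
        if check():
            return True
        time.sleep(delay)
    return False

# A check that only succeeds on its third call.
state = {"calls": 0}
def flaky():
    state["calls"] += 1
    return state["calls"] >= 3

print(poll_until(flaky, tries=5, delay=0))  # -> True
```

Returning `False` instead of raising keeps the final `assert good, ...` responsible for producing the diagnostic message.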
@ -14,7 +14,7 @@ import pytest_twisted

import util

# see "conftest.py" for the fixtures (e.g. "magic_folder")
# see "conftest.py" for the fixtures (e.g. "tor_network")

@pytest_twisted.inlineCallbacks
def test_onion_service_storage(reactor, request, temp_dir, flog_gatherer, tor_network, tor_introducer_furl):
@ -498,7 +498,3 @@ def await_client_ready(tahoe, timeout=10, liveness=60*2):
                tahoe,
            )
        )


def magic_folder_cli(request, reactor, node_dir, *argv):
    return cli(request, reactor, node_dir, "magic-folder", *argv)
1
newsfragments/3284.removed
Normal file
@ -0,0 +1 @@
The Magic Folder frontend has been split out into a stand-alone project. The functionality is no longer part of Tahoe-LAFS itself. Learn more at <https://github.com/LeastAuthority/magic-folder>.
0
newsfragments/3299.minor
Normal file
0
newsfragments/3300.minor
Normal file
0
newsfragments/3302.minor
Normal file
0
newsfragments/3303.minor
Normal file
@ -12,15 +12,14 @@ buildPythonPackage rec {
  postPatch = ''
    substituteInPlace setup.py \
      --replace "boltons >= 19.0.1" boltons
    # depends on eliot.prettyprint._main which we don't have here.
    rm eliot/tests/test_prettyprint.py

    # Fails intermittently.
    substituteInPlace eliot/tests/test_validation.py \
      --replace "def test_omitLoggerFromActionType" "def xtest_omitLoggerFromActionType" \
      --replace "def test_logCallsDefaultLoggerWrite" "def xtest_logCallsDefaultLoggerWrite"
  '';

  # A seemingly random subset of the test suite fails intermittently. After
  # Tahoe-LAFS is ported to Python 3 we can update to a newer Eliot and, if
  # the test suite continues to fail, maybe it will be more likely that we can
  # have upstream fix it for us.
  doCheck = false;

  checkInputs = [ testtools pytest hypothesis ];
  propagatedBuildInputs = [ zope_interface pyrsistent boltons ];
5
setup.py
@ -62,9 +62,7 @@ install_requires = [
    # version of cryptography will *really* be installed.
    "cryptography >= 2.6",

    # * On Linux we need at least Twisted 10.1.0 for inotify support
    #   used by the drop-upload frontend.
    # * We also need Twisted 10.1.0 for the FTP frontend in order for
    # * We need Twisted 10.1.0 for the FTP frontend in order for
    #   Twisted's FTP server to support asynchronous close.
    # * The SFTP frontend depends on Twisted 11.0.0 to fix the SSH server
    #   rekeying bug <https://twistedmatrix.com/trac/ticket/4395>
@ -354,7 +352,6 @@ setup(name="tahoe-lafs", # also set in __init__.py
        # https://tahoe-lafs.org/trac/tahoe-lafs/ticket/2392 for some
        # discussion.
        ':sys_platform=="win32"': ["pywin32 != 226"],
        ':sys_platform!="win32" and sys_platform!="linux2"': ["watchdog"],  # For magic-folder on "darwin" (macOS) and the BSDs
        "test": [
            # Pin a specific pyflakes so we don't have different folks
            # disagreeing on what is or is not a lint issue. We can bump
@ -85,9 +85,6 @@ _client_config = configutil.ValidConfiguration(
        "stats_gatherer.furl",
        "storage.plugins",
    ),
    "drop_upload": (  # deprecated already?
        "enabled",
    ),
    "ftpd": (
        "accounts.file",
        "accounts.url",
@ -121,12 +118,6 @@ _client_config = configutil.ValidConfiguration(
    "helper": (
        "enabled",
    ),
    "magic_folder": (
        "download.umask",
        "enabled",
        "local.directory",
        "poll_interval",
    ),
    },
    is_valid_section=_is_valid_section,
    # Anything in a valid section is a valid item, for now.
@ -681,7 +672,6 @@ class _Client(node.Node, pollmixin.PollMixin):
        """
        node.Node.__init__(self, config, main_tub, control_tub, i2p_provider, tor_provider)

        self._magic_folders = dict()
        self.started_timestamp = time.time()
        self.logSource = "Client"
        self.encoding_params = self.DEFAULT_ENCODING_PARAMETERS.copy()
@ -707,7 +697,6 @@ class _Client(node.Node, pollmixin.PollMixin):
        self.init_helper()
        self.init_ftp_server()
        self.init_sftp_server()
        self.init_magic_folder()

# If the node sees an exit_trigger file, it will poll every second to see
# whether the file still exists, and what its mtime is. If the file does not
@ -968,9 +957,6 @@ class _Client(node.Node, pollmixin.PollMixin):
        This returns a local authentication token, which is just some
        random data in "api_auth_token" which must be echoed to API
        calls.

        Currently only the URI '/magic' for magic-folder status; other
        endpoints are invited to include this as well, as appropriate.
        """
        return self.config.get_private_config('api_auth_token')

@ -1088,40 +1074,6 @@ class _Client(node.Node, pollmixin.PollMixin):
                             sftp_portstr, pubkey_file, privkey_file)
        s.setServiceParent(self)

    def init_magic_folder(self):
        #print "init_magic_folder"
        if self.config.get_config("drop_upload", "enabled", False, boolean=True):
            raise node.OldConfigOptionError(
                "The [drop_upload] section must be renamed to [magic_folder].\n"
                "See docs/frontends/magic-folder.rst for more information."
            )

        if self.config.get_config("magic_folder", "enabled", False, boolean=True):
            from allmydata.frontends import magic_folder

            try:
                magic_folders = magic_folder.load_magic_folders(self.config._basedir)
            except Exception as e:
                log.msg("Error loading magic-folder config: {}".format(e))
                raise

            # start processing the upload queue when we've connected to
            # enough servers
            threshold = min(self.encoding_params["k"],
                            self.encoding_params["happy"] + 1)

            for (name, mf_config) in magic_folders.items():
                self.log("Starting magic_folder '{}'".format(name))
                s = magic_folder.MagicFolder.from_config(self, name, mf_config)
                self._magic_folders[name] = s
                s.setServiceParent(self)

                connected_d = self.storage_broker.when_connected_enough(threshold)
                def connected_enough(ign, mf):
                    mf.ready()  # returns a Deferred we ignore
                    return None
                connected_d.addCallback(connected_enough, s)

    def _check_exit_trigger(self, exit_trigger_file):
        if os.path.exists(exit_trigger_file):
            mtime = os.stat(exit_trigger_file)[stat.ST_MTIME]
File diff suppressed because it is too large
@ -1,204 +0,0 @@
from __future__ import print_function

import sys
from collections import namedtuple

from allmydata.util.dbutil import get_db, DBError
from allmydata.util.eliotutil import (
    RELPATH,
    VERSION,
    LAST_UPLOADED_URI,
    LAST_DOWNLOADED_URI,
    LAST_DOWNLOADED_TIMESTAMP,
    PATHINFO,
    validateSetMembership,
    validateInstanceOf,
)
from eliot import (
    Field,
    ActionType,
)

PathEntry = namedtuple('PathEntry', 'size mtime_ns ctime_ns version last_uploaded_uri '
                       'last_downloaded_uri last_downloaded_timestamp')

PATHENTRY = Field(
    u"pathentry",
    lambda v: None if v is None else {
        "size": v.size,
        "mtime_ns": v.mtime_ns,
        "ctime_ns": v.ctime_ns,
        "version": v.version,
        "last_uploaded_uri": v.last_uploaded_uri,
        "last_downloaded_uri": v.last_downloaded_uri,
        "last_downloaded_timestamp": v.last_downloaded_timestamp,
    },
    u"The local database state of a file.",
    validateInstanceOf((type(None), PathEntry)),
)

_INSERT_OR_UPDATE = Field.for_types(
    u"insert_or_update",
    [unicode],
    u"An indication of whether the record for this upload was new or an update to a previous entry.",
    validateSetMembership({u"insert", u"update"}),
)

UPDATE_ENTRY = ActionType(
    u"magic-folder-db:update-entry",
    [RELPATH, VERSION, LAST_UPLOADED_URI, LAST_DOWNLOADED_URI, LAST_DOWNLOADED_TIMESTAMP, PATHINFO],
    [_INSERT_OR_UPDATE],
    u"Record some metadata about a relative path in the magic-folder.",
)


# magic-folder db schema version 1
SCHEMA_v1 = """
CREATE TABLE version
(
   version INTEGER  -- contains one row, set to 1
);

CREATE TABLE local_files
(
   path VARCHAR(1024) PRIMARY KEY,    -- UTF-8 filename relative to local magic folder dir
   size INTEGER,                      -- ST_SIZE, or NULL if the file has been deleted
   mtime_ns INTEGER,                  -- ST_MTIME in nanoseconds
   ctime_ns INTEGER,                  -- ST_CTIME in nanoseconds
   version INTEGER,
   last_uploaded_uri VARCHAR(256),    -- URI:CHK:...
   last_downloaded_uri VARCHAR(256),  -- URI:CHK:...
   last_downloaded_timestamp TIMESTAMP
);
"""


def get_magicfolderdb(dbfile, stderr=sys.stderr,
                      create_version=(SCHEMA_v1, 1), just_create=False):
    # Open or create the given backupdb file. The parent directory must
    # exist.
    try:
        (sqlite3, db) = get_db(dbfile, stderr, create_version,
                               just_create=just_create, dbname="magicfolderdb")
        if create_version[1] in (1, 2):
            return MagicFolderDB(sqlite3, db)
        else:
            print("invalid magicfolderdb schema version specified", file=stderr)
            return None
    except DBError as e:
        print(e, file=stderr)
        return None

class LocalPath(object):
    @classmethod
    def fromrow(self, row):
        p = LocalPath()
        p.relpath_u = row[0]
        p.entry = PathEntry(*row[1:])
        return p


class MagicFolderDB(object):
    VERSION = 1

    def __init__(self, sqlite_module, connection):
        self.sqlite_module = sqlite_module
        self.connection = connection
        self.cursor = connection.cursor()

    def close(self):
        self.connection.close()

    def get_db_entry(self, relpath_u):
        """
        Retrieve the entry in the database for a given path, or return None
        if there is no such entry.
        """
        c = self.cursor
        c.execute("SELECT size, mtime_ns, ctime_ns, version, last_uploaded_uri,"
                  " last_downloaded_uri, last_downloaded_timestamp"
                  " FROM local_files"
                  " WHERE path=?",
                  (relpath_u,))
        row = self.cursor.fetchone()
        if not row:
            return None
        else:
            (size, mtime_ns, ctime_ns, version, last_uploaded_uri,
             last_downloaded_uri, last_downloaded_timestamp) = row
            return PathEntry(size=size, mtime_ns=mtime_ns, ctime_ns=ctime_ns, version=version,
                             last_uploaded_uri=last_uploaded_uri,
                             last_downloaded_uri=last_downloaded_uri,
                             last_downloaded_timestamp=last_downloaded_timestamp)

    def get_direct_children(self, relpath_u):
        """
        Given the relative path to a directory, return ``LocalPath`` instances
        representing all direct children of that directory.
        """
        # It would be great to not be interpolating data into query
        # statements. However, query parameters are not supported in the
        # position where we need them.
        sqlitesafe_relpath_u = relpath_u.replace(u"'", u"''")
        statement = (
            """
            SELECT
                path, size, mtime_ns, ctime_ns, version, last_uploaded_uri,
                last_downloaded_uri, last_downloaded_timestamp
            FROM
                local_files
            WHERE
                -- The "_" used here ensures there is at least one character
                -- after the /. This prevents matching the path itself.
                path LIKE '{path}/_%' AND

                -- The "_" used here serves a similar purpose. This allows
                -- matching directory children but avoids matching their
                -- children.
                path NOT LIKE '{path}/_%/_%'
            """
        ).format(path=sqlitesafe_relpath_u)

        self.cursor.execute(statement)
        rows = self.cursor.fetchall()
        return list(
            LocalPath.fromrow(row)
            for row
            in rows
        )
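The `LIKE` trick in `get_direct_children` can be checked in isolation: `'{path}/_%'` requires at least one character after the slash (excluding the path itself), and `NOT LIKE '{path}/_%/_%'` excludes grandchildren. A standalone sketch against an in-memory SQLite table with the same `path` column (the table contents are made up for illustration):

```python
import sqlite3

# In-memory table standing in for local_files, with a mix of the
# parent itself, direct children, a grandchild, and a sibling whose
# name merely starts with the same prefix.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE local_files (path TEXT PRIMARY KEY)")
for p in ["top", "top/a", "top/a/b", "top/c", "topx/a"]:
    conn.execute("INSERT INTO local_files VALUES (?)", (p,))

# '_' matches exactly one character, '%' matches any run of characters.
rows = conn.execute(
    "SELECT path FROM local_files"
    " WHERE path LIKE 'top/_%' AND path NOT LIKE 'top/_%/_%'"
).fetchall()
print(sorted(r[0] for r in rows))  # -> ['top/a', 'top/c']
```

Note that `'topx/a'` is excluded because `top/` is matched literally, which is why the pattern is safe against prefix collisions.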
    def get_all_relpaths(self):
        """
        Retrieve a set of all relpaths of files that have had an entry in magic folder db
        (i.e. that have been downloaded at least once).
        """
        self.cursor.execute("SELECT path FROM local_files")
        rows = self.cursor.fetchall()
        return set([r[0] for r in rows])

    def did_upload_version(self, relpath_u, version, last_uploaded_uri, last_downloaded_uri, last_downloaded_timestamp, pathinfo):
        action = UPDATE_ENTRY(
            relpath=relpath_u,
            version=version,
            last_uploaded_uri=last_uploaded_uri,
            last_downloaded_uri=last_downloaded_uri,
            last_downloaded_timestamp=last_downloaded_timestamp,
            pathinfo=pathinfo,
        )
        with action:
            try:
                self.cursor.execute("INSERT INTO local_files VALUES (?,?,?,?,?,?,?,?)",
                                    (relpath_u, pathinfo.size, pathinfo.mtime_ns, pathinfo.ctime_ns,
                                     version, last_uploaded_uri, last_downloaded_uri,
                                     last_downloaded_timestamp))
                action.add_success_fields(insert_or_update=u"insert")
            except (self.sqlite_module.IntegrityError, self.sqlite_module.OperationalError):
                self.cursor.execute("UPDATE local_files"
                                    " SET size=?, mtime_ns=?, ctime_ns=?, version=?, last_uploaded_uri=?,"
                                    " last_downloaded_uri=?, last_downloaded_timestamp=?"
                                    " WHERE path=?",
                                    (pathinfo.size, pathinfo.mtime_ns, pathinfo.ctime_ns, version,
                                     last_uploaded_uri, last_downloaded_uri, last_downloaded_timestamp,
                                     relpath_u))
                action.add_success_fields(insert_or_update=u"update")
        self.connection.commit()
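The try-INSERT-then-UPDATE flow in `did_upload_version` is a classic upsert against a primary-key column. A standalone sketch against an in-memory SQLite database (the two-column table here is a deliberate simplification of the real `local_files` schema):

```python
import sqlite3

# Minimal table with a primary key on path, like local_files.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE local_files (path TEXT PRIMARY KEY, version INTEGER)")

def record(path, version):
    # Try the INSERT first; if the row already exists the PRIMARY KEY
    # constraint raises IntegrityError and we fall back to UPDATE.
    try:
        conn.execute("INSERT INTO local_files VALUES (?,?)", (path, version))
        return "insert"
    except sqlite3.IntegrityError:
        conn.execute("UPDATE local_files SET version=? WHERE path=?", (version, path))
        return "update"

print(record(u"a.txt", 0))  # -> insert
print(record(u"a.txt", 1))  # -> update
```

The real method additionally wraps both branches in an Eliot action so the `insert`/`update` outcome is logged, and commits once at the end.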
@ -1,32 +0,0 @@
import re
import os.path

from allmydata.util.assertutil import precondition, _assert

def path2magic(path):
    return re.sub(u'[/@]', lambda m: {u'/': u'@_', u'@': u'@@'}[m.group(0)], path)

def magic2path(path):
    return re.sub(u'@[_@]', lambda m: {u'@_': u'/', u'@@': u'@'}[m.group(0)], path)


IGNORE_SUFFIXES = [u'.backup', u'.tmp', u'.conflict']
IGNORE_PREFIXES = [u'.']

def should_ignore_file(path_u):
    precondition(isinstance(path_u, unicode), path_u=path_u)

    for suffix in IGNORE_SUFFIXES:
        if path_u.endswith(suffix):
            return True

    while path_u != u"":
        oldpath_u = path_u
        path_u, tail_u = os.path.split(path_u)
        if tail_u.startswith(u"."):
            return True
        if path_u == oldpath_u:
            return True  # the path was absolute
        _assert(len(path_u) < len(oldpath_u), path_u=path_u, oldpath_u=oldpath_u)

    return False
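The `path2magic`/`magic2path` pair above is a reversible escaping scheme: `/` becomes `@_` and `@` becomes `@@`, so a relative path can be flattened into a single name and recovered exactly. A standalone round-trip check (Python 3 syntax; the original module is Python 2):

```python
import re

def path2magic(path):
    # Escape '/' as '@_' and '@' as '@@' so a relative path can be
    # stored as one flat magic-folder name.
    return re.sub(u'[/@]', lambda m: {u'/': u'@_', u'@': u'@@'}[m.group(0)], path)

def magic2path(path):
    # Inverse transform: '@_' back to '/' and '@@' back to '@'.
    return re.sub(u'@[_@]', lambda m: {u'@_': u'/', u'@@': u'@'}[m.group(0)], path)

print(path2magic(u'docs/a@b'))               # -> docs@_a@@b
print(magic2path(path2magic(u'docs/a@b')))   # -> docs/a@b
```

Escaping `@` itself is what makes the encoding unambiguous: without it, a literal `@_` in a filename would decode as a slash.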
@ -1,610 +0,0 @@
from __future__ import print_function

import os
import urllib
from types import NoneType
from six.moves import cStringIO as StringIO
from datetime import datetime
import json


from twisted.python import usage

from allmydata.util.assertutil import precondition

from .common import BaseOptions, BasedirOptions, get_aliases
from .cli import MakeDirectoryOptions, LnOptions, CreateAliasOptions
import tahoe_mv
from allmydata.util.encodingutil import argv_to_abspath, argv_to_unicode, to_str, \
    quote_local_unicode_path
from allmydata.scripts.common_http import do_http, BadResponse
from allmydata.util import fileutil
from allmydata import uri
from allmydata.util.abbreviate import abbreviate_space, abbreviate_time
from allmydata.frontends.magic_folder import load_magic_folders
from allmydata.frontends.magic_folder import save_magic_folders
from allmydata.frontends.magic_folder import maybe_upgrade_magic_folders


INVITE_SEPARATOR = "+"

class CreateOptions(BasedirOptions):
    nickname = None  # NOTE: *not* the "name of this magic-folder"
    local_dir = None
    synopsis = "MAGIC_ALIAS: [NICKNAME LOCAL_DIR]"
    optParameters = [
        ("poll-interval", "p", "60", "How often to ask for updates"),
        ("name", "n", "default", "The name of this magic-folder"),
    ]
    description = (
        "Create a new magic-folder. If you specify NICKNAME and "
        "LOCAL_DIR, this client will also be invited and join "
        "using the given nickname. A new alias (see 'tahoe list-aliases') "
        "will be added with the master folder's writecap."
    )

    def parseArgs(self, alias, nickname=None, local_dir=None):
        BasedirOptions.parseArgs(self)
        alias = argv_to_unicode(alias)
        if not alias.endswith(u':'):
            raise usage.UsageError("An alias must end with a ':' character.")
        self.alias = alias[:-1]
        self.nickname = None if nickname is None else argv_to_unicode(nickname)
        try:
            if int(self['poll-interval']) <= 0:
                raise ValueError("should be positive")
        except ValueError:
            raise usage.UsageError(
                "--poll-interval must be a positive integer"
            )

        # Expand the path relative to the current directory of the CLI command, not the node.
        self.local_dir = None if local_dir is None else argv_to_abspath(local_dir, long_path=False)

        if self.nickname and not self.local_dir:
            raise usage.UsageError("If NICKNAME is specified then LOCAL_DIR must also be specified.")
        node_url_file = os.path.join(self['node-directory'], u"node.url")
        self['node-url'] = fileutil.read(node_url_file).strip()

def _delegate_options(source_options, target_options):
    target_options.aliases = get_aliases(source_options['node-directory'])
    target_options["node-url"] = source_options["node-url"]
    target_options["node-directory"] = source_options["node-directory"]
    target_options["name"] = source_options["name"]
    target_options.stdin = StringIO("")
    target_options.stdout = StringIO()
    target_options.stderr = StringIO()
    return target_options

def create(options):
    precondition(isinstance(options.alias, unicode), alias=options.alias)
    precondition(isinstance(options.nickname, (unicode, NoneType)), nickname=options.nickname)
    precondition(isinstance(options.local_dir, (unicode, NoneType)), local_dir=options.local_dir)

    # make sure we don't already have a magic-folder with this name before we create the alias
    maybe_upgrade_magic_folders(options["node-directory"])
    folders = load_magic_folders(options["node-directory"])
    if options['name'] in folders:
        print("Already have a magic-folder named '{}'".format(options['name']), file=options.stderr)
        return 1

    # create an alias; this basically just remembers the cap for the
    # master directory
    from allmydata.scripts import tahoe_add_alias
    create_alias_options = _delegate_options(options, CreateAliasOptions())
    create_alias_options.alias = options.alias

    rc = tahoe_add_alias.create_alias(create_alias_options)
    if rc != 0:
        print(create_alias_options.stderr.getvalue(), file=options.stderr)
        return rc
    print(create_alias_options.stdout.getvalue(), file=options.stdout)

    if options.nickname is not None:
        print(u"Inviting myself as client '{}':".format(options.nickname), file=options.stdout)
        invite_options = _delegate_options(options, InviteOptions())
        invite_options.alias = options.alias
        invite_options.nickname = options.nickname
        invite_options['name'] = options['name']
        rc = invite(invite_options)
        if rc != 0:
            print(u"magic-folder: failed to invite after create\n", file=options.stderr)
            print(invite_options.stderr.getvalue(), file=options.stderr)
            return rc
        invite_code = invite_options.stdout.getvalue().strip()
        print(u"  created invite code", file=options.stdout)
        join_options = _delegate_options(options, JoinOptions())
        join_options['poll-interval'] = options['poll-interval']
        join_options.nickname = options.nickname
        join_options.local_dir = options.local_dir
        join_options.invite_code = invite_code
        rc = join(join_options)
        if rc != 0:
            print(u"magic-folder: failed to join after create\n", file=options.stderr)
            print(join_options.stderr.getvalue(), file=options.stderr)
            return rc
        print(u"  joined new magic-folder", file=options.stdout)
        print(
            u"Successfully created magic-folder '{}' with alias '{}:' "
            u"and client '{}'\nYou must re-start your node before the "
            u"magic-folder will be active."
            .format(options['name'], options.alias, options.nickname), file=options.stdout)
    return 0


class ListOptions(BasedirOptions):
    description = (
        "List all magic-folders this client has joined"
    )
    optFlags = [
        ("json", "", "Produce JSON output")
    ]


def list_(options):
    folders = load_magic_folders(options["node-directory"])
    if options["json"]:
        _list_json(options, folders)
        return 0
    _list_human(options, folders)
    return 0


def _list_json(options, folders):
    """
    List our magic-folders using JSON
    """
    info = dict()
    for name, details in folders.items():
        info[name] = {
            u"directory": details["directory"],
        }
    print(json.dumps(info), file=options.stdout)
    return 0


def _list_human(options, folders):
    """
    List our magic-folders for a human user
    """
    if folders:
        print("This client has the following magic-folders:", file=options.stdout)
        biggest = max([len(nm) for nm in folders.keys()])
        fmt = "  {:>%d}: {}" % (biggest, )
        for name, details in folders.items():
            print(fmt.format(name, details["directory"]), file=options.stdout)
    else:
        print("No magic-folders", file=options.stdout)


class InviteOptions(BasedirOptions):
    nickname = None
    synopsis = "MAGIC_ALIAS: NICKNAME"
    stdin = StringIO("")
    optParameters = [
        ("name", "n", "default", "The name of this magic-folder"),
    ]
    description = (
        "Invite a new participant to a given magic-folder. The resulting "
        "invite-code that is printed is secret information and MUST be "
        "transmitted securely to the invitee."
    )

    def parseArgs(self, alias, nickname=None):
        BasedirOptions.parseArgs(self)
        alias = argv_to_unicode(alias)
        if not alias.endswith(u':'):
            raise usage.UsageError("An alias must end with a ':' character.")
        self.alias = alias[:-1]
        self.nickname = argv_to_unicode(nickname)
        node_url_file = os.path.join(self['node-directory'], u"node.url")
        self['node-url'] = open(node_url_file, "r").read().strip()
        aliases = get_aliases(self['node-directory'])
        self.aliases = aliases


def invite(options):
    precondition(isinstance(options.alias, unicode), alias=options.alias)
    precondition(isinstance(options.nickname, unicode), nickname=options.nickname)

    from allmydata.scripts import tahoe_mkdir
    mkdir_options = _delegate_options(options, MakeDirectoryOptions())
    mkdir_options.where = None

    rc = tahoe_mkdir.mkdir(mkdir_options)
    if rc != 0:
        print("magic-folder: failed to mkdir\n", file=options.stderr)
        return rc

    # FIXME this assumes caps are ASCII.
    dmd_write_cap = mkdir_options.stdout.getvalue().strip()
    dmd_readonly_cap = uri.from_string(dmd_write_cap).get_readonly().to_string()
    if dmd_readonly_cap is None:
        print("magic-folder: failed to diminish dmd write cap\n", file=options.stderr)
        return 1

    magic_write_cap = get_aliases(options["node-directory"])[options.alias]
    magic_readonly_cap = uri.from_string(magic_write_cap).get_readonly().to_string()

    # tahoe ln CLIENT_READCAP COLLECTIVE_WRITECAP/NICKNAME
    ln_options = _delegate_options(options, LnOptions())
    ln_options.from_file = unicode(dmd_readonly_cap, 'utf-8')
    ln_options.to_file = u"%s/%s" % (unicode(magic_write_cap, 'utf-8'), options.nickname)
    rc = tahoe_mv.mv(ln_options, mode="link")
    if rc != 0:
        print("magic-folder: failed to create link\n", file=options.stderr)
        print(ln_options.stderr.getvalue(), file=options.stderr)
        return rc

    # FIXME: this assumes caps are ASCII.
    print("%s%s%s" % (magic_readonly_cap, INVITE_SEPARATOR, dmd_write_cap), file=options.stdout)
    return 0

class JoinOptions(BasedirOptions):
    synopsis = "INVITE_CODE LOCAL_DIR"
    dmd_write_cap = ""
    magic_readonly_cap = ""
    optParameters = [
        ("poll-interval", "p", "60", "How often to ask for updates"),
        ("name", "n", "default", "Name of the magic-folder"),
    ]

    def parseArgs(self, invite_code, local_dir):
        BasedirOptions.parseArgs(self)

        try:
            if int(self['poll-interval']) <= 0:
                raise ValueError("should be positive")
        except ValueError:
            raise usage.UsageError(
                "--poll-interval must be a positive integer"
            )
        # Expand the path relative to the current directory of the CLI command, not the node.
        self.local_dir = None if local_dir is None else argv_to_abspath(local_dir, long_path=False)
        self.invite_code = to_str(argv_to_unicode(invite_code))

def join(options):
    fields = options.invite_code.split(INVITE_SEPARATOR)
    if len(fields) != 2:
        raise usage.UsageError("Invalid invite code.")
    magic_readonly_cap, dmd_write_cap = fields

    maybe_upgrade_magic_folders(options["node-directory"])
    existing_folders = load_magic_folders(options["node-directory"])

    if options['name'] in existing_folders:
        print("This client already has a magic-folder named '{}'".format(options['name']), file=options.stderr)
        return 1

    db_fname = os.path.join(
        options["node-directory"],
        u"private",
        u"magicfolder_{}.sqlite".format(options['name']),
    )
    if os.path.exists(db_fname):
        print("Database '{}' already exists; not overwriting".format(db_fname), file=options.stderr)
        return 1

    folder = {
        u"directory": options.local_dir.encode('utf-8'),
        u"collective_dircap": magic_readonly_cap,
        u"upload_dircap": dmd_write_cap,
        u"poll_interval": options["poll-interval"],
    }
    existing_folders[options["name"]] = folder

    save_magic_folders(options["node-directory"], existing_folders)
    return 0
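The invite code that `invite()` prints and `join()` parses is just the collective's read-only cap and the invitee's DMD write-cap joined by `INVITE_SEPARATOR`. A standalone sketch of the round trip (the cap strings below are placeholders, not real Tahoe caps):

```python
INVITE_SEPARATOR = "+"

def make_invite(collective_readcap, dmd_write_cap):
    # Same format as the final print() in invite():
    # READCAP + "+" + WRITECAP.
    return "%s%s%s" % (collective_readcap, INVITE_SEPARATOR, dmd_write_cap)

def parse_invite(invite_code):
    # Same split performed at the top of join().
    fields = invite_code.split(INVITE_SEPARATOR)
    if len(fields) != 2:
        raise ValueError("Invalid invite code.")
    return tuple(fields)

code = make_invite("URI:DIR2-RO:collective", "URI:DIR2:dmd")
print(parse_invite(code))  # -> ('URI:DIR2-RO:collective', 'URI:DIR2:dmd')
```

This works because Tahoe capability strings never contain `+`, so a two-way split is unambiguous.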

class LeaveOptions(BasedirOptions):
    synopsis = "Remove a magic-folder and forget all state"
    optParameters = [
        ("name", "n", "default", "Name of magic-folder to leave"),
    ]


def leave(options):
    from ConfigParser import SafeConfigParser

    existing_folders = load_magic_folders(options["node-directory"])

    if not existing_folders:
        print("No magic-folders at all", file=options.stderr)
        return 1

    if options["name"] not in existing_folders:
        print("No such magic-folder '{}'".format(options["name"]), file=options.stderr)
        return 1

    privdir = os.path.join(options["node-directory"], u"private")
    db_fname = os.path.join(privdir, u"magicfolder_{}.sqlite".format(options["name"]))

    # delete from YAML file and re-write it
    del existing_folders[options["name"]]
    save_magic_folders(options["node-directory"], existing_folders)

    # delete the database file
    try:
        fileutil.remove(db_fname)
    except Exception as e:
        print("Warning: unable to remove %s due to %s: %s"
              % (quote_local_unicode_path(db_fname), e.__class__.__name__, str(e)), file=options.stderr)

    # if this was the last magic-folder, disable them entirely
    if not existing_folders:
        parser = SafeConfigParser()
        parser.read(os.path.join(options["node-directory"], u"tahoe.cfg"))
        parser.remove_section("magic_folder")
        with open(os.path.join(options["node-directory"], u"tahoe.cfg"), "w") as f:
            parser.write(f)

    return 0


class StatusOptions(BasedirOptions):
    synopsis = ""
    stdin = StringIO("")
    optParameters = [
        ("name", "n", "default", "Name for the magic-folder to show status"),
    ]

    def parseArgs(self):
        BasedirOptions.parseArgs(self)
        node_url_file = os.path.join(self['node-directory'], u"node.url")
        with open(node_url_file, "r") as f:
            self['node-url'] = f.read().strip()


def _get_json_for_fragment(options, fragment, method='GET', post_args=None):
    nodeurl = options['node-url']
    if nodeurl.endswith('/'):
        nodeurl = nodeurl[:-1]

    url = u'%s/%s' % (nodeurl, fragment)
    if method == 'POST':
        if post_args is None:
            raise ValueError("Must pass post_args= for POST method")
        body = urllib.urlencode(post_args)
    else:
        body = ''
        if post_args is not None:
            raise ValueError("post_args= only valid for POST method")
    resp = do_http(method, url, body=body)
    if isinstance(resp, BadResponse):
        # specifically NOT using format_http_error() here because the
        # URL is pretty sensitive (we're doing /uri/<key>).
        raise RuntimeError(
            "Failed to get json from '%s': %s" % (nodeurl, resp.error)
        )

    data = resp.read()
    parsed = json.loads(data)
    if parsed is None:
        raise RuntimeError("No data from '%s'" % (nodeurl,))
    return parsed


def _get_json_for_cap(options, cap):
    return _get_json_for_fragment(
        options,
        'uri/%s?t=json' % urllib.quote(cap),
    )

def _print_item_status(item, now, longest):
    paddedname = (' ' * (longest - len(item['path']))) + item['path']
    if 'failure_at' in item:
        ts = datetime.fromtimestamp(item['started_at'])
        prog = 'Failed %s (%s)' % (abbreviate_time(now - ts), ts)
    elif item['percent_done'] < 100.0:
        if 'started_at' not in item:
            prog = 'not yet started'
        else:
            so_far = now - datetime.fromtimestamp(item['started_at'])
            if so_far.seconds > 0.0:
                rate = item['percent_done'] / so_far.seconds
                if rate != 0:
                    time_left = (100.0 - item['percent_done']) / rate
                    prog = '%2.1f%% done, around %s left' % (
                        item['percent_done'],
                        abbreviate_time(time_left),
                    )
                else:
                    time_left = None
                    prog = '%2.1f%% done' % (item['percent_done'],)
            else:
                prog = 'just started'
    else:
        prog = ''
        for verb in ['finished', 'started', 'queued']:
            keyname = verb + '_at'
            if keyname in item:
                when = datetime.fromtimestamp(item[keyname])
                prog = '%s %s' % (verb, abbreviate_time(now - when))
                break

    print("  %s: %s" % (paddedname, prog))
|
||||
|
||||
def status(options):
|
||||
nodedir = options["node-directory"]
|
||||
stdout, stderr = options.stdout, options.stderr
|
||||
magic_folders = load_magic_folders(os.path.join(options["node-directory"]))
|
||||
|
||||
with open(os.path.join(nodedir, u'private', u'api_auth_token'), 'rb') as f:
|
||||
token = f.read()
|
||||
|
||||
print("Magic-folder status for '{}':".format(options["name"]), file=stdout)
|
||||
|
||||
if options["name"] not in magic_folders:
|
||||
raise Exception(
|
||||
"No such magic-folder '{}'".format(options["name"])
|
||||
)
|
||||
|
||||
dmd_cap = magic_folders[options["name"]]["upload_dircap"]
|
||||
collective_readcap = magic_folders[options["name"]]["collective_dircap"]
|
||||
|
||||
# do *all* our data-retrievals first in case there's an error
|
||||
try:
|
||||
dmd_data = _get_json_for_cap(options, dmd_cap)
|
||||
remote_data = _get_json_for_cap(options, collective_readcap)
|
||||
magic_data = _get_json_for_fragment(
|
||||
options,
|
||||
'magic_folder?t=json',
|
||||
method='POST',
|
||||
post_args=dict(
|
||||
t='json',
|
||||
name=options["name"],
|
||||
token=token,
|
||||
)
|
||||
)
|
||||
except Exception as e:
|
||||
print("failed to retrieve data: %s" % str(e), file=stderr)
|
||||
return 2
|
||||
|
||||
for d in [dmd_data, remote_data, magic_data]:
|
||||
if isinstance(d, dict) and 'error' in d:
|
||||
print("Error from server: %s" % d['error'], file=stderr)
|
||||
print("This means we can't retrieve the remote shared directory.", file=stderr)
|
||||
return 3
|
||||
|
||||
captype, dmd = dmd_data
|
||||
if captype != 'dirnode':
|
||||
print("magic_folder_dircap isn't a directory capability", file=stderr)
|
||||
return 2
|
||||
|
||||
now = datetime.now()
|
||||
|
||||
print("Local files:", file=stdout)
|
||||
for (name, child) in dmd['children'].items():
|
||||
captype, meta = child
|
||||
status = 'good'
|
||||
size = meta['size']
|
||||
created = datetime.fromtimestamp(meta['metadata']['tahoe']['linkcrtime'])
|
||||
version = meta['metadata']['version']
|
||||
nice_size = abbreviate_space(size)
|
||||
nice_created = abbreviate_time(now - created)
|
||||
if captype != 'filenode':
|
||||
print("%20s: error, should be a filecap" % name, file=stdout)
|
||||
continue
|
||||
print(" %s (%s): %s, version=%s, created %s" % (name, nice_size, status, version, nice_created), file=stdout)
|
||||
|
||||
print(file=stdout)
|
||||
print("Remote files:", file=stdout)
|
||||
|
||||
captype, collective = remote_data
|
||||
for (name, data) in collective['children'].items():
|
||||
if data[0] != 'dirnode':
|
||||
print("Error: '%s': expected a dirnode, not '%s'" % (name, data[0]), file=stdout)
|
||||
print(" %s's remote:" % name, file=stdout)
|
||||
dmd = _get_json_for_cap(options, data[1]['ro_uri'])
|
||||
if isinstance(dmd, dict) and 'error' in dmd:
|
||||
print(" Error: could not retrieve directory", file=stdout)
|
||||
continue
|
||||
if dmd[0] != 'dirnode':
|
||||
print("Error: should be a dirnode", file=stdout)
|
||||
continue
|
||||
for (n, d) in dmd[1]['children'].items():
|
||||
if d[0] != 'filenode':
|
||||
print("Error: expected '%s' to be a filenode." % (n,), file=stdout)
|
||||
|
||||
meta = d[1]
|
||||
status = 'good'
|
||||
size = meta['size']
|
||||
created = datetime.fromtimestamp(meta['metadata']['tahoe']['linkcrtime'])
|
||||
version = meta['metadata']['version']
|
||||
nice_size = abbreviate_space(size)
|
||||
nice_created = abbreviate_time(now - created)
|
||||
print(" %s (%s): %s, version=%s, created %s" % (n, nice_size, status, version, nice_created), file=stdout)
|
||||
|
||||
if len(magic_data):
|
||||
uploads = [item for item in magic_data if item['kind'] == 'upload']
|
||||
downloads = [item for item in magic_data if item['kind'] == 'download']
|
||||
longest = max([len(item['path']) for item in magic_data])
|
||||
|
||||
# maybe gate this with --show-completed option or something?
|
||||
uploads = [item for item in uploads if item['status'] != 'success']
|
||||
downloads = [item for item in downloads if item['status'] != 'success']
|
||||
|
||||
if len(uploads):
|
||||
print()
|
||||
print("Uploads:", file=stdout)
|
||||
for item in uploads:
|
||||
_print_item_status(item, now, longest)
|
||||
|
||||
if len(downloads):
|
||||
print()
|
||||
print("Downloads:", file=stdout)
|
||||
for item in downloads:
|
||||
_print_item_status(item, now, longest)
|
||||
|
||||
for item in magic_data:
|
||||
if item['status'] == 'failure':
|
||||
print("Failed:", item, file=stdout)
|
||||
|
||||
return 0
|
||||
|
||||
|
||||
class MagicFolderCommand(BaseOptions):
|
||||
subCommands = [
|
||||
["create", None, CreateOptions, "Create a Magic Folder."],
|
||||
["invite", None, InviteOptions, "Invite someone to a Magic Folder."],
|
||||
["join", None, JoinOptions, "Join a Magic Folder."],
|
||||
["leave", None, LeaveOptions, "Leave a Magic Folder."],
|
||||
["status", None, StatusOptions, "Display status of uploads/downloads."],
|
||||
["list", None, ListOptions, "List Magic Folders configured in this client."],
|
||||
]
|
||||
optFlags = [
|
||||
["debug", "d", "Print full stack-traces"],
|
||||
]
|
||||
description = (
|
||||
"A magic-folder has an owner who controls the writecap "
|
||||
"containing a list of nicknames and readcaps. The owner can invite "
|
||||
"new participants. Every participant has the writecap for their "
|
||||
"own folder (the corresponding readcap is in the master folder). "
|
||||
"All clients download files from all other participants using the "
|
||||
"readcaps contained in the master magic-folder directory."
|
||||
)
|
||||
|
||||
def postOptions(self):
|
||||
if not hasattr(self, 'subOptions'):
|
||||
raise usage.UsageError("must specify a subcommand")
|
||||
def getSynopsis(self):
|
||||
return "Usage: tahoe [global-options] magic-folder"
|
||||
def getUsage(self, width=None):
|
||||
t = BaseOptions.getUsage(self, width)
|
||||
t += (
|
||||
"Please run e.g. 'tahoe magic-folder create --help' for more "
|
||||
"details on each subcommand.\n"
|
||||
)
|
||||
return t
|
||||
|
||||
subDispatch = {
|
||||
"create": create,
|
||||
"invite": invite,
|
||||
"join": join,
|
||||
"leave": leave,
|
||||
"status": status,
|
||||
"list": list_,
|
||||
}
|
||||
|
||||
def do_magic_folder(options):
|
||||
so = options.subOptions
|
||||
so.stdout = options.stdout
|
||||
so.stderr = options.stderr
|
||||
f = subDispatch[options.subCommand]
|
||||
try:
|
||||
return f(so)
|
||||
except Exception as e:
|
||||
print("Error: %s" % (e,), file=options.stderr)
|
||||
if options['debug']:
|
||||
raise
|
||||
|
||||
subCommands = [
|
||||
["magic-folder", None, MagicFolderCommand,
|
||||
"Magic Folder subcommands: use 'tahoe magic-folder' for a list."],
|
||||
]
|
||||
|
||||
dispatch = {
|
||||
"magic-folder": do_magic_folder,
|
||||
}
|
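The progress line built by `_print_item_status` above estimates time remaining by assuming a constant transfer rate: the rate is percent done divided by elapsed seconds, and the remainder is extrapolated to 100%. A minimal, self-contained sketch of that arithmetic; `estimate_time_left` is an illustrative name, not part of the Tahoe-LAFS code:

```python
# Sketch of the ETA arithmetic used by _print_item_status: assume a constant
# rate and extrapolate to 100%. Illustrative only -- not Tahoe-LAFS code.
def estimate_time_left(percent_done, elapsed_seconds):
    """Return estimated seconds remaining, or None if there is no rate yet."""
    if elapsed_seconds <= 0 or percent_done <= 0:
        return None  # no progress information to extrapolate from
    rate = percent_done / elapsed_seconds   # percent per second
    return (100.0 - percent_done) / rate    # seconds until 100%

print(estimate_time_left(25.0, 10.0))  # 25% in 10s -> 30.0 seconds left
```

As in the code above, a zero rate (no measurable progress yet) yields no estimate rather than a division error.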
@@ -9,7 +9,7 @@ from twisted.internet import defer, task, threads
 from allmydata.version_checks import get_package_versions_string
 from allmydata.scripts.common import get_default_nodedir
 from allmydata.scripts import debug, create_node, cli, \
-    stats_gatherer, admin, magic_folder_cli, tahoe_daemonize, tahoe_start, \
+    stats_gatherer, admin, tahoe_daemonize, tahoe_start, \
     tahoe_stop, tahoe_restart, tahoe_run, tahoe_invite
 from allmydata.util.encodingutil import quote_output, quote_local_unicode_path, get_io_encoding
 from allmydata.util.eliotutil import (
@@ -61,7 +61,6 @@ class Options(usage.Options):
                     + process_control_commands
                     + debug.subCommands
                     + cli.subCommands
-                    + magic_folder_cli.subCommands
                     + tahoe_invite.subCommands
                     )
 
@@ -154,10 +153,6 @@ def dispatch(config,
         # these are blocking, and must be run in a thread
         f0 = cli.dispatch[command]
         f = lambda so: threads.deferToThread(f0, so)
-    elif command in magic_folder_cli.dispatch:
-        # same
-        f0 = magic_folder_cli.dispatch[command]
-        f = lambda so: threads.deferToThread(f0, so)
     elif command in tahoe_invite.dispatch:
         f = tahoe_invite.dispatch[command]
     else:
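The dispatch hunks above route blocking CLI commands through `threads.deferToThread` so the Twisted reactor stays responsive while the command runs. The same shape can be sketched with only the standard library; here `ThreadPoolExecutor` plays the role of Twisted's thread pool and `slow_add` is a made-up stand-in for a blocking CLI entry point:

```python
# Sketch: hand a blocking call to a worker thread and get a future for the
# result, analogous to threads.deferToThread(f0, so) in the diff above.
# ThreadPoolExecutor and slow_add are illustrative stand-ins, not Tahoe-LAFS code.
from concurrent.futures import ThreadPoolExecutor

def slow_add(a, b):
    # pretend this blocks on I/O, like a synchronous CLI command would
    return a + b

with ThreadPoolExecutor(max_workers=1) as pool:
    future = pool.submit(slow_add, 2, 3)  # returns immediately
    result = future.result()              # blocks only when we ask

print(result)  # 5
```

The design point is the same in both cases: the caller gets a handle (Deferred or Future) right away, and the blocking work happens off the event loop's thread.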
@@ -1,3 +1,24 @@
# -*- coding: utf-8 -*-
# Tahoe-LAFS -- secure, distributed storage grid
#
# Copyright © 2020 The Tahoe-LAFS Software Foundation
#
# This file is part of Tahoe-LAFS.
#
# See the docs/about.rst file for licensing information.

"""
Some setup that should apply across the entire test suite.

Rather than defining interesting APIs for other code to use, this just causes
some side-effects which make things better when the test suite runs.
"""

from traceback import extract_stack, format_list
from foolscap.pb import Listener
from twisted.python.log import err
from twisted.application import service


from foolscap.logging.incident import IncidentQualifier
class NonQualifier(IncidentQualifier, object):
@@ -52,6 +73,39 @@ def _configure_hypothesis():
    settings.load_profile(profile_name)
_configure_hypothesis()

def logging_for_pb_listener():
    """
    Make Foolscap listen error reports include Listener creation stack
    information.
    """
    original__init__ = Listener.__init__
    def _listener__init__(self, *a, **kw):
        original__init__(self, *a, **kw)
        # Capture the stack here, where Listener is instantiated. This is
        # likely to explain what code is responsible for this Listener, useful
        # information to have when the Listener eventually fails to listen.
        self._creation_stack = extract_stack()

    # Override the Foolscap implementation with one that has an errback
    def _listener_startService(self):
        service.Service.startService(self)
        d = self._ep.listen(self)
        def _listening(lp):
            self._lp = lp
        d.addCallbacks(
            _listening,
            # Make sure that this listen failure is reported promptly and with
            # the creation stack.
            err,
            errbackArgs=(
                "Listener created at {}".format(
                    "".join(format_list(self._creation_stack)),
                ),
            ),
        )
    Listener.__init__ = _listener__init__
    Listener.startService = _listener_startService
logging_for_pb_listener()

import sys
if sys.platform == "win32":

@@ -1,814 +0,0 @@
import json
import shutil
import os.path
import mock
import re
import time
from datetime import datetime

from eliot import (
    log_call,
    start_action,
)
from eliot.twisted import (
    DeferredContext,
)

from twisted.trial import unittest
from twisted.internet import defer
from twisted.internet import reactor
from twisted.python import usage

from allmydata.util.assertutil import precondition
from allmydata.util import fileutil
from allmydata.scripts.common import get_aliases
from ..no_network import GridTestMixin
from ..common_util import parse_cli
from .common import CLITestMixin
from allmydata.test.common_util import NonASCIIPathMixin
from allmydata.scripts import magic_folder_cli
from allmydata.util.fileutil import abspath_expanduser_unicode
from allmydata.util.encodingutil import unicode_to_argv
from allmydata.frontends.magic_folder import MagicFolder
from allmydata import uri
from ...util.eliotutil import (
    log_call_deferred,
)

class MagicFolderCLITestMixin(CLITestMixin, GridTestMixin, NonASCIIPathMixin):
    def setUp(self):
        GridTestMixin.setUp(self)
        self.alice_nickname = self.unicode_or_fallback(u"Alice\u00F8", u"Alice", io_as_well=True)
        self.bob_nickname = self.unicode_or_fallback(u"Bob\u00F8", u"Bob", io_as_well=True)

    def do_create_magic_folder(self, client_num):
        with start_action(action_type=u"create-magic-folder", client_num=client_num).context():
            d = DeferredContext(
                self.do_cli(
                    "magic-folder", "--debug", "create", "magic:",
                    client_num=client_num,
                )
            )
        def _done(args):
            (rc, stdout, stderr) = args
            self.failUnlessEqual(rc, 0, stdout + stderr)
            self.assertIn("Alias 'magic' created", stdout)
            # self.failUnlessIn("joined new magic-folder", stdout)
            # self.failUnlessIn("Successfully created magic-folder", stdout)
            self.failUnlessEqual(stderr, "")
            aliases = get_aliases(self.get_clientdir(i=client_num))
            self.assertIn("magic", aliases)
            self.failUnless(aliases["magic"].startswith("URI:DIR2:"))
        d.addCallback(_done)
        return d.addActionFinish()

    def do_invite(self, client_num, nickname):
        nickname_arg = unicode_to_argv(nickname)
        action = start_action(
            action_type=u"invite-to-magic-folder",
            client_num=client_num,
            nickname=nickname,
        )
        with action.context():
            d = DeferredContext(
                self.do_cli(
                    "magic-folder",
                    "invite",
                    "magic:",
                    nickname_arg,
                    client_num=client_num,
                )
            )
        def _done(args):
            (rc, stdout, stderr) = args
            self.failUnlessEqual(rc, 0)
            return (rc, stdout, stderr)
        d.addCallback(_done)
        return d.addActionFinish()

    def do_list(self, client_num, json=False):
        args = ("magic-folder", "list",)
        if json:
            args = args + ("--json",)
        d = self.do_cli(*args, client_num=client_num)
        def _done(args):
            (rc, stdout, stderr) = args
            return (rc, stdout, stderr)
        d.addCallback(_done)
        return d

    def do_status(self, client_num, name=None):
        args = ("magic-folder", "status",)
        if name is not None:
            args = args + ("--name", name)
        d = self.do_cli(*args, client_num=client_num)
        def _done(args):
            (rc, stdout, stderr) = args
            return (rc, stdout, stderr)
        d.addCallback(_done)
        return d

    def do_join(self, client_num, local_dir, invite_code):
        action = start_action(
            action_type=u"join-magic-folder",
            client_num=client_num,
            local_dir=local_dir,
            invite_code=invite_code,
        )
        with action.context():
            precondition(isinstance(local_dir, unicode), local_dir=local_dir)
            precondition(isinstance(invite_code, str), invite_code=invite_code)
            local_dir_arg = unicode_to_argv(local_dir)
            d = DeferredContext(
                self.do_cli(
                    "magic-folder",
                    "join",
                    invite_code,
                    local_dir_arg,
                    client_num=client_num,
                )
            )
        def _done(args):
            (rc, stdout, stderr) = args
            self.failUnlessEqual(rc, 0)
            self.failUnlessEqual(stdout, "")
            self.failUnlessEqual(stderr, "")
            return (rc, stdout, stderr)
        d.addCallback(_done)
        return d.addActionFinish()

    def do_leave(self, client_num):
        d = self.do_cli("magic-folder", "leave", client_num=client_num)
        def _done(args):
            (rc, stdout, stderr) = args
            self.failUnlessEqual(rc, 0)
            return (rc, stdout, stderr)
        d.addCallback(_done)
        return d

    def check_joined_config(self, client_num, upload_dircap):
        """Tests that our collective directory has the readonly cap of
        our upload directory.
        """
        action = start_action(action_type=u"check-joined-config")
        with action.context():
            collective_readonly_cap = self.get_caps_from_files(client_num)[0]
            d = DeferredContext(
                self.do_cli(
                    "ls", "--json",
                    collective_readonly_cap,
                    client_num=client_num,
                )
            )
        def _done(args):
            (rc, stdout, stderr) = args
            self.failUnlessEqual(rc, 0)
            return (rc, stdout, stderr)
        d.addCallback(_done)
        def test_joined_magic_folder(args):
            (rc, stdout, stderr) = args
            readonly_cap = unicode(uri.from_string(upload_dircap).get_readonly().to_string(), 'utf-8')
            s = re.search(readonly_cap, stdout)
            self.failUnless(s is not None)
            return None
        d.addCallback(test_joined_magic_folder)
        return d.addActionFinish()

    def get_caps_from_files(self, client_num):
        from allmydata.frontends.magic_folder import load_magic_folders
        folders = load_magic_folders(self.get_clientdir(i=client_num))
        mf = folders["default"]
        return mf['collective_dircap'], mf['upload_dircap']

    @log_call
    def check_config(self, client_num, local_dir):
        client_config = fileutil.read(os.path.join(self.get_clientdir(i=client_num), "tahoe.cfg"))
        mf_yaml = fileutil.read(os.path.join(self.get_clientdir(i=client_num), "private", "magic_folders.yaml"))
        local_dir_utf8 = local_dir.encode('utf-8')
        magic_folder_config = "[magic_folder]\nenabled = True"
        self.assertIn(magic_folder_config, client_config)
        self.assertIn(local_dir_utf8, mf_yaml)

    def create_invite_join_magic_folder(self, nickname, local_dir):
        nickname_arg = unicode_to_argv(nickname)
        local_dir_arg = unicode_to_argv(local_dir)
        # the --debug means we get real exceptions on failures
        d = self.do_cli("magic-folder", "--debug", "create", "magic:", nickname_arg, local_dir_arg)
        def _done(args):
            (rc, stdout, stderr) = args
            self.failUnlessEqual(rc, 0, stdout + stderr)

            client = self.get_client()
            self.collective_dircap, self.upload_dircap = self.get_caps_from_files(0)
            self.collective_dirnode = client.create_node_from_uri(self.collective_dircap)
            self.upload_dirnode = client.create_node_from_uri(self.upload_dircap)
        d.addCallback(_done)
        d.addCallback(lambda ign: self.check_joined_config(0, self.upload_dircap))
        d.addCallback(lambda ign: self.check_config(0, local_dir))
        return d

    # XXX should probably just be "tearDown"...
    @log_call_deferred(action_type=u"test:cli:magic-folder:cleanup")
    def cleanup(self, res):
        d = DeferredContext(defer.succeed(None))
        def _clean(ign):
            return self.magicfolder.disownServiceParent()

        d.addCallback(_clean)
        d.addCallback(lambda ign: res)
        return d.result

    def init_magicfolder(self, client_num, upload_dircap, collective_dircap, local_magic_dir, clock):
        dbfile = abspath_expanduser_unicode(u"magicfolder_default.sqlite", base=self.get_clientdir(i=client_num))
        magicfolder = MagicFolder(
            client=self.get_client(client_num),
            upload_dircap=upload_dircap,
            collective_dircap=collective_dircap,
            local_path_u=local_magic_dir,
            dbfile=dbfile,
            umask=0o077,
            name='default',
            clock=clock,
            uploader_delay=0.2,
            downloader_delay=0,
        )

        magicfolder.setServiceParent(self.get_client(client_num))
        magicfolder.ready()
        return magicfolder

    def setup_alice_and_bob(self, alice_clock=reactor, bob_clock=reactor):
        self.set_up_grid(num_clients=2, oneshare=True)

        self.alice_magicfolder = None
        self.bob_magicfolder = None

        alice_magic_dir = abspath_expanduser_unicode(u"Alice-magic", base=self.basedir)
        self.mkdir_nonascii(alice_magic_dir)
        bob_magic_dir = abspath_expanduser_unicode(u"Bob-magic", base=self.basedir)
        self.mkdir_nonascii(bob_magic_dir)

        # Alice creates a Magic Folder, invites herself and joins.
        d = self.do_create_magic_folder(0)
        d.addCallback(lambda ign: self.do_invite(0, self.alice_nickname))
        def get_invite_code(result):
            self.invite_code = result[1].strip()
        d.addCallback(get_invite_code)
        d.addCallback(lambda ign: self.do_join(0, alice_magic_dir, self.invite_code))
        def get_alice_caps(ign):
            self.alice_collective_dircap, self.alice_upload_dircap = self.get_caps_from_files(0)
        d.addCallback(get_alice_caps)
        d.addCallback(lambda ign: self.check_joined_config(0, self.alice_upload_dircap))
        d.addCallback(lambda ign: self.check_config(0, alice_magic_dir))
        def get_Alice_magicfolder(result):
            self.alice_magicfolder = self.init_magicfolder(0, self.alice_upload_dircap,
                                                           self.alice_collective_dircap,
                                                           alice_magic_dir, alice_clock)
            return result
        d.addCallback(get_Alice_magicfolder)

        # Alice invites Bob. Bob joins.
        d.addCallback(lambda ign: self.do_invite(0, self.bob_nickname))
        def get_invite_code(result):
            self.invite_code = result[1].strip()
        d.addCallback(get_invite_code)
        d.addCallback(lambda ign: self.do_join(1, bob_magic_dir, self.invite_code))
        def get_bob_caps(ign):
            self.bob_collective_dircap, self.bob_upload_dircap = self.get_caps_from_files(1)
        d.addCallback(get_bob_caps)
        d.addCallback(lambda ign: self.check_joined_config(1, self.bob_upload_dircap))
        d.addCallback(lambda ign: self.check_config(1, bob_magic_dir))
        def get_Bob_magicfolder(result):
            self.bob_magicfolder = self.init_magicfolder(1, self.bob_upload_dircap,
                                                         self.bob_collective_dircap,
                                                         bob_magic_dir, bob_clock)
            return result
        d.addCallback(get_Bob_magicfolder)
        return d


class ListMagicFolder(MagicFolderCLITestMixin, unittest.TestCase):

    @defer.inlineCallbacks
    def setUp(self):
        yield super(ListMagicFolder, self).setUp()
        self.basedir="mf_list"
        self.set_up_grid(oneshare=True)
        self.local_dir = os.path.join(self.basedir, "magic")
        os.mkdir(self.local_dir)
        self.abs_local_dir_u = abspath_expanduser_unicode(unicode(self.local_dir), long_path=False)

        yield self.do_create_magic_folder(0)
        (rc, stdout, stderr) = yield self.do_invite(0, self.alice_nickname)
        invite_code = stdout.strip()
        yield self.do_join(0, unicode(self.local_dir), invite_code)

    @defer.inlineCallbacks
    def tearDown(self):
        yield super(ListMagicFolder, self).tearDown()
        shutil.rmtree(self.basedir)

    @defer.inlineCallbacks
    def test_list(self):
        rc, stdout, stderr = yield self.do_list(0)
        self.failUnlessEqual(rc, 0)
        self.assertIn("default:", stdout)

    @defer.inlineCallbacks
    def test_list_none(self):
        yield self.do_leave(0)
        rc, stdout, stderr = yield self.do_list(0)
        self.failUnlessEqual(rc, 0)
        self.assertIn("No magic-folders", stdout)

    @defer.inlineCallbacks
    def test_list_json(self):
        rc, stdout, stderr = yield self.do_list(0, json=True)
        self.failUnlessEqual(rc, 0)
        res = json.loads(stdout)
        self.assertEqual(
            dict(default=dict(directory=self.abs_local_dir_u)),
            res,
        )


class StatusMagicFolder(MagicFolderCLITestMixin, unittest.TestCase):

    @defer.inlineCallbacks
    def setUp(self):
        yield super(StatusMagicFolder, self).setUp()
        self.basedir="mf_list"
        self.set_up_grid(oneshare=True)
        self.local_dir = os.path.join(self.basedir, "magic")
        os.mkdir(self.local_dir)
        self.abs_local_dir_u = abspath_expanduser_unicode(unicode(self.local_dir), long_path=False)

        yield self.do_create_magic_folder(0)
        (rc, stdout, stderr) = yield self.do_invite(0, self.alice_nickname)
        invite_code = stdout.strip()
        yield self.do_join(0, unicode(self.local_dir), invite_code)

    @defer.inlineCallbacks
    def tearDown(self):
        yield super(StatusMagicFolder, self).tearDown()
        shutil.rmtree(self.basedir)

    @defer.inlineCallbacks
    def test_status(self):
        now = datetime.now()
        then = now.replace(year=now.year - 5)
        five_year_interval = (now - then).total_seconds()

        def json_for_cap(options, cap):
            if cap.startswith('URI:DIR2:'):
                return (
                    'dirnode',
                    {
                        "children": {
                            "foo": ('filenode', {
                                "size": 1234,
                                "metadata": {
                                    "tahoe": {
                                        "linkcrtime": (time.time() - five_year_interval),
                                    },
                                    "version": 1,
                                },
                                "ro_uri": "read-only URI",
                            })
                        }
                    }
                )
            else:
                return ('dirnode', {"children": {}})
        jc = mock.patch(
            "allmydata.scripts.magic_folder_cli._get_json_for_cap",
            side_effect=json_for_cap,
        )

        def json_for_frag(options, fragment, method='GET', post_args=None):
            return {}
        jf = mock.patch(
            "allmydata.scripts.magic_folder_cli._get_json_for_fragment",
            side_effect=json_for_frag,
        )

        with jc, jf:
            rc, stdout, stderr = yield self.do_status(0)
            self.failUnlessEqual(rc, 0)
            self.assertIn("default", stdout)

            self.assertIn(
                "foo (1.23 kB): good, version=1, created 5 years ago",
                stdout,
            )

    @defer.inlineCallbacks
    def test_status_child_not_dirnode(self):
        def json_for_cap(options, cap):
            if cap.startswith('URI:DIR2'):
                return (
                    'dirnode',
                    {
                        "children": {
                            "foo": ('filenode', {
                                "size": 1234,
                                "metadata": {
                                    "tahoe": {
                                        "linkcrtime": 0.0,
                                    },
                                    "version": 1,
                                },
                                "ro_uri": "read-only URI",
                            })
                        }
                    }
                )
            elif cap == "read-only URI":
                return {
                    "error": "bad stuff",
                }
            else:
                return ('dirnode', {"children": {}})
        jc = mock.patch(
            "allmydata.scripts.magic_folder_cli._get_json_for_cap",
            side_effect=json_for_cap,
        )

        def json_for_frag(options, fragment, method='GET', post_args=None):
            return {}
        jf = mock.patch(
            "allmydata.scripts.magic_folder_cli._get_json_for_fragment",
            side_effect=json_for_frag,
        )

        with jc, jf:
            rc, stdout, stderr = yield self.do_status(0)
            self.failUnlessEqual(rc, 0)

            self.assertIn(
                "expected a dirnode",
                stdout + stderr,
            )

    @defer.inlineCallbacks
    def test_status_error_not_dircap(self):
        def json_for_cap(options, cap):
            if cap.startswith('URI:DIR2:'):
                return (
                    'filenode',
                    {}
                )
            else:
                return ('dirnode', {"children": {}})
        jc = mock.patch(
            "allmydata.scripts.magic_folder_cli._get_json_for_cap",
            side_effect=json_for_cap,
        )

        def json_for_frag(options, fragment, method='GET', post_args=None):
            return {}
        jf = mock.patch(
            "allmydata.scripts.magic_folder_cli._get_json_for_fragment",
            side_effect=json_for_frag,
        )

        with jc, jf:
            rc, stdout, stderr = yield self.do_status(0)
            self.failUnlessEqual(rc, 2)
        self.assertIn(
            "magic_folder_dircap isn't a directory capability",
            stdout + stderr,
        )

    @defer.inlineCallbacks
    def test_status_nothing(self):
        rc, stdout, stderr = yield self.do_status(0, name="blam")
        self.assertIn("No such magic-folder 'blam'", stderr)


class CreateMagicFolder(MagicFolderCLITestMixin, unittest.TestCase):
    def test_create_and_then_invite_join(self):
        self.basedir = "cli/MagicFolder/create-and-then-invite-join"
        self.set_up_grid(oneshare=True)
        local_dir = os.path.join(self.basedir, "magic")
        os.mkdir(local_dir)
        abs_local_dir_u = abspath_expanduser_unicode(unicode(local_dir), long_path=False)

        d = self.do_create_magic_folder(0)
        d.addCallback(lambda ign: self.do_invite(0, self.alice_nickname))
        def get_invite_code_and_join(args):
            (rc, stdout, stderr) = args
            invite_code = stdout.strip()
            return self.do_join(0, unicode(local_dir), invite_code)
        d.addCallback(get_invite_code_and_join)
        def get_caps(ign):
            self.collective_dircap, self.upload_dircap = self.get_caps_from_files(0)
        d.addCallback(get_caps)
        d.addCallback(lambda ign: self.check_joined_config(0, self.upload_dircap))
        d.addCallback(lambda ign: self.check_config(0, abs_local_dir_u))
        return d

    def test_create_error(self):
        self.basedir = "cli/MagicFolder/create-error"
        self.set_up_grid(oneshare=True)

        d = self.do_cli("magic-folder", "create", "m a g i c:", client_num=0)
        def _done(args):
            (rc, stdout, stderr) = args
            self.failIfEqual(rc, 0)
            self.failUnlessIn("Alias names cannot contain spaces.", stderr)
        d.addCallback(_done)
        return d

    @defer.inlineCallbacks
    def test_create_duplicate_name(self):
        self.basedir = "cli/MagicFolder/create-dup"
        self.set_up_grid(oneshare=True)

        rc, stdout, stderr = yield self.do_cli(
            "magic-folder", "create", "magic:", "--name", "foo",
            client_num=0,
        )
        self.assertEqual(rc, 0)

        rc, stdout, stderr = yield self.do_cli(
            "magic-folder", "create", "magic:", "--name", "foo",
            client_num=0,
        )
        self.assertEqual(rc, 1)
        self.assertIn(
            "Already have a magic-folder named 'default'",
            stderr
        )

    @defer.inlineCallbacks
    def test_leave_wrong_folder(self):
        self.basedir = "cli/MagicFolder/leave_wrong_folders"
        yield self.set_up_grid(oneshare=True)
        magic_dir = os.path.join(self.basedir, 'magic')
        os.mkdir(magic_dir)

        rc, stdout, stderr = yield self.do_cli(
            "magic-folder", "create", "--name", "foo", "magic:", "my_name", magic_dir,
            client_num=0,
        )
        self.assertEqual(rc, 0)

        rc, stdout, stderr = yield self.do_cli(
            "magic-folder", "leave", "--name", "bar",
            client_num=0,
        )
        self.assertNotEqual(rc, 0)
        self.assertIn(
            "No such magic-folder 'bar'",
            stdout + stderr,
        )

    @defer.inlineCallbacks
    def test_leave_no_folder(self):
        self.basedir = "cli/MagicFolder/leave_no_folders"
        yield self.set_up_grid(oneshare=True)
        magic_dir = os.path.join(self.basedir, 'magic')
        os.mkdir(magic_dir)

        rc, stdout, stderr = yield self.do_cli(
            "magic-folder", "create", "--name", "foo", "magic:", "my_name", magic_dir,
            client_num=0,
        )
        self.assertEqual(rc, 0)

        rc, stdout, stderr = yield self.do_cli(
            "magic-folder", "leave", "--name", "foo",
            client_num=0,
        )
        self.assertEqual(rc, 0)

        rc, stdout, stderr = yield self.do_cli(
            "magic-folder", "leave", "--name", "foo",
            client_num=0,
        )
        self.assertEqual(rc, 1)
        self.assertIn(
            "No magic-folders at all",
            stderr,
        )

    @defer.inlineCallbacks
    def test_leave_no_folders_at_all(self):
        self.basedir = "cli/MagicFolder/leave_no_folders_at_all"
        yield self.set_up_grid(oneshare=True)

        rc, stdout, stderr = yield self.do_cli(
            "magic-folder", "leave",
            client_num=0,
        )
        self.assertEqual(rc, 1)
        self.assertIn(
            "No magic-folders at all",
            stderr,
        )

    def test_create_invite_join(self):
        self.basedir = "cli/MagicFolder/create-invite-join"
        self.set_up_grid(oneshare=True)
        local_dir = os.path.join(self.basedir, "magic")
        abs_local_dir_u = abspath_expanduser_unicode(unicode(local_dir), long_path=False)

        d = self.do_cli("magic-folder", "create", "magic:", "Alice", local_dir)
        def _done(args):
            (rc, stdout, stderr) = args
            self.failUnlessEqual(rc, 0)
            self.collective_dircap, self.upload_dircap = self.get_caps_from_files(0)
        d.addCallback(_done)
        d.addCallback(lambda ign: self.check_joined_config(0, self.upload_dircap))
        d.addCallback(lambda ign: self.check_config(0, abs_local_dir_u))
        return d

    def test_help_synopsis(self):
        self.basedir = "cli/MagicFolder/help_synopsis"
        os.makedirs(self.basedir)

        o = magic_folder_cli.CreateOptions()
        o.parent = magic_folder_cli.MagicFolderCommand()
        o.parent.getSynopsis()

    def test_create_invite_join_failure(self):
        self.basedir = "cli/MagicFolder/create-invite-join-failure"
        os.makedirs(self.basedir)

        o = magic_folder_cli.CreateOptions()
        o.parent = magic_folder_cli.MagicFolderCommand()
        o.parent['node-directory'] = self.basedir
        try:
            o.parseArgs("magic:", "Alice", "-foo")
        except usage.UsageError as e:
            self.failUnlessIn("cannot start with '-'", str(e))
        else:
            self.fail("expected UsageError")

    def test_join_failure(self):
        self.basedir = "cli/MagicFolder/create-join-failure"
        os.makedirs(self.basedir)

        o = magic_folder_cli.JoinOptions()
        o.parent = magic_folder_cli.MagicFolderCommand()
        o.parent['node-directory'] = self.basedir
        try:
            o.parseArgs("URI:invite+URI:code", "-foo")
        except usage.UsageError as e:
            self.failUnlessIn("cannot start with '-'", str(e))
        else:
            self.fail("expected UsageError")

    def test_join_twice_failure(self):
        self.basedir = "cli/MagicFolder/create-join-twice-failure"
        os.makedirs(self.basedir)
        self.set_up_grid(oneshare=True)
|
||||
local_dir = os.path.join(self.basedir, "magic")
|
||||
abs_local_dir_u = abspath_expanduser_unicode(unicode(local_dir), long_path=False)
|
||||
|
||||
d = self.do_create_magic_folder(0)
|
||||
d.addCallback(lambda ign: self.do_invite(0, self.alice_nickname))
|
||||
def get_invite_code_and_join(args):
|
||||
(rc, stdout, stderr) = args
|
||||
self.invite_code = stdout.strip()
|
||||
return self.do_join(0, unicode(local_dir), self.invite_code)
|
||||
d.addCallback(get_invite_code_and_join)
|
||||
def get_caps(ign):
|
||||
self.collective_dircap, self.upload_dircap = self.get_caps_from_files(0)
|
||||
d.addCallback(get_caps)
|
||||
d.addCallback(lambda ign: self.check_joined_config(0, self.upload_dircap))
|
||||
d.addCallback(lambda ign: self.check_config(0, abs_local_dir_u))
|
||||
def join_again(ignore):
|
||||
return self.do_cli("magic-folder", "join", self.invite_code, local_dir, client_num=0)
|
||||
d.addCallback(join_again)
|
||||
def get_results(result):
|
||||
(rc, out, err) = result
|
||||
self.failUnlessEqual(out, "")
|
||||
self.failUnlessIn("This client already has a magic-folder", err)
|
||||
self.failIfEqual(rc, 0)
|
||||
d.addCallback(get_results)
|
||||
return d
|
||||
|
||||
def test_join_leave_join(self):
|
||||
self.basedir = "cli/MagicFolder/create-join-leave-join"
|
||||
os.makedirs(self.basedir)
|
||||
self.set_up_grid(oneshare=True)
|
||||
local_dir = os.path.join(self.basedir, "magic")
|
||||
abs_local_dir_u = abspath_expanduser_unicode(unicode(local_dir), long_path=False)
|
||||
|
||||
self.invite_code = None
|
||||
d = self.do_create_magic_folder(0)
|
||||
d.addCallback(lambda ign: self.do_invite(0, self.alice_nickname))
|
||||
def get_invite_code_and_join(args):
|
||||
(rc, stdout, stderr) = args
|
||||
self.failUnlessEqual(rc, 0)
|
||||
self.invite_code = stdout.strip()
|
||||
return self.do_join(0, unicode(local_dir), self.invite_code)
|
||||
d.addCallback(get_invite_code_and_join)
|
||||
def get_caps(ign):
|
||||
self.collective_dircap, self.upload_dircap = self.get_caps_from_files(0)
|
||||
d.addCallback(get_caps)
|
||||
d.addCallback(lambda ign: self.check_joined_config(0, self.upload_dircap))
|
||||
d.addCallback(lambda ign: self.check_config(0, abs_local_dir_u))
|
||||
d.addCallback(lambda ign: self.do_leave(0))
|
||||
|
||||
d.addCallback(lambda ign: self.do_join(0, unicode(local_dir), self.invite_code))
|
||||
def get_caps(ign):
|
||||
self.collective_dircap, self.upload_dircap = self.get_caps_from_files(0)
|
||||
d.addCallback(get_caps)
|
||||
d.addCallback(lambda ign: self.check_joined_config(0, self.upload_dircap))
|
||||
d.addCallback(lambda ign: self.check_config(0, abs_local_dir_u))
|
||||
|
||||
return d
|
||||
|
||||
def test_join_failures(self):
|
||||
self.basedir = "cli/MagicFolder/create-join-failures"
|
||||
os.makedirs(self.basedir)
|
||||
self.set_up_grid(oneshare=True)
|
||||
local_dir = os.path.join(self.basedir, "magic")
|
||||
os.mkdir(local_dir)
|
||||
abs_local_dir_u = abspath_expanduser_unicode(unicode(local_dir), long_path=False)
|
||||
|
||||
self.invite_code = None
|
||||
d = self.do_create_magic_folder(0)
|
||||
d.addCallback(lambda ign: self.do_invite(0, self.alice_nickname))
|
||||
def get_invite_code_and_join(args):
|
||||
(rc, stdout, stderr) = args
|
||||
self.failUnlessEqual(rc, 0)
|
||||
self.invite_code = stdout.strip()
|
||||
return self.do_join(0, unicode(local_dir), self.invite_code)
|
||||
d.addCallback(get_invite_code_and_join)
|
||||
def get_caps(ign):
|
||||
self.collective_dircap, self.upload_dircap = self.get_caps_from_files(0)
|
||||
d.addCallback(get_caps)
|
||||
d.addCallback(lambda ign: self.check_joined_config(0, self.upload_dircap))
|
||||
d.addCallback(lambda ign: self.check_config(0, abs_local_dir_u))
|
||||
|
||||
def check_success(result):
|
||||
(rc, out, err) = result
|
||||
self.failUnlessEqual(rc, 0, out + err)
|
||||
def check_failure(result):
|
||||
(rc, out, err) = result
|
||||
self.failIfEqual(rc, 0)
|
||||
|
||||
def leave(ign):
|
||||
return self.do_cli("magic-folder", "leave", client_num=0)
|
||||
d.addCallback(leave)
|
||||
d.addCallback(check_success)
|
||||
|
||||
magic_folder_db_file = os.path.join(self.get_clientdir(i=0), u"private", u"magicfolder_default.sqlite")
|
||||
|
||||
def check_join_if_file(my_file):
|
||||
fileutil.write(my_file, "my file data")
|
||||
d2 = self.do_cli("magic-folder", "join", self.invite_code, local_dir, client_num=0)
|
||||
d2.addCallback(check_failure)
|
||||
return d2
|
||||
|
||||
for my_file in [magic_folder_db_file]:
|
||||
d.addCallback(lambda ign, my_file: check_join_if_file(my_file), my_file)
|
||||
d.addCallback(leave)
|
||||
# we didn't successfully join, so leaving should be an error
|
||||
d.addCallback(check_failure)
|
||||
|
||||
return d
|
||||
|
||||
class CreateErrors(unittest.TestCase):
|
||||
def test_poll_interval(self):
|
||||
e = self.assertRaises(usage.UsageError, parse_cli,
|
||||
"magic-folder", "create", "--poll-interval=frog",
|
||||
"alias:")
|
||||
self.assertEqual(str(e), "--poll-interval must be a positive integer")
|
||||
|
||||
e = self.assertRaises(usage.UsageError, parse_cli,
|
||||
"magic-folder", "create", "--poll-interval=-4",
|
||||
"alias:")
|
||||
self.assertEqual(str(e), "--poll-interval must be a positive integer")
|
||||
|
||||
def test_alias(self):
|
||||
e = self.assertRaises(usage.UsageError, parse_cli,
|
||||
"magic-folder", "create", "no-colon")
|
||||
self.assertEqual(str(e), "An alias must end with a ':' character.")
|
||||
|
||||
def test_nickname(self):
|
||||
e = self.assertRaises(usage.UsageError, parse_cli,
|
||||
"magic-folder", "create", "alias:", "nickname")
|
||||
self.assertEqual(str(e), "If NICKNAME is specified then LOCAL_DIR must also be specified.")
|
||||
|
||||
class InviteErrors(unittest.TestCase):
|
||||
def test_alias(self):
|
||||
e = self.assertRaises(usage.UsageError, parse_cli,
|
||||
"magic-folder", "invite", "no-colon")
|
||||
self.assertEqual(str(e), "An alias must end with a ':' character.")
|
||||
|
||||
class JoinErrors(unittest.TestCase):
|
||||
def test_poll_interval(self):
|
||||
e = self.assertRaises(usage.UsageError, parse_cli,
|
||||
"magic-folder", "join", "--poll-interval=frog",
|
||||
"code", "localdir")
|
||||
self.assertEqual(str(e), "--poll-interval must be a positive integer")
|
||||
|
||||
e = self.assertRaises(usage.UsageError, parse_cli,
|
||||
"magic-folder", "join", "--poll-interval=-2",
|
||||
"code", "localdir")
|
||||
self.assertEqual(str(e), "--poll-interval must be a positive integer")
|
@@ -37,10 +37,9 @@ from testtools.twistedsupport import (
)

import allmydata
import allmydata.frontends.magic_folder
import allmydata.util.log

from allmydata.node import OldConfigError, OldConfigOptionError, UnescapedHashError, _Config, create_node_dir
from allmydata.node import OldConfigError, UnescapedHashError, _Config, create_node_dir
from allmydata.frontends.auth import NeedRootcapLookupScheme
from allmydata.version_checks import (
    get_package_versions_string,
@@ -658,104 +657,6 @@ class Basic(testutil.ReallyEqualMixin, testutil.NonASCIIPathMixin, unittest.Test
        yield _check("helper.furl = None", None)
        yield _check("helper.furl = pb://blah\n", "pb://blah")

    @defer.inlineCallbacks
    def test_create_magic_folder_service(self):
        """
        providing magic-folder options actually creates a MagicFolder service
        """
        boom = False
        class Boom(Exception):
            pass

        class MockMagicFolder(allmydata.frontends.magic_folder.MagicFolder):
            name = 'magic-folder'

            def __init__(self, client, upload_dircap, collective_dircap, local_path_u, dbfile, umask, name,
                         inotify=None, uploader_delay=1.0, clock=None, downloader_delay=3):
                if boom:
                    raise Boom()

                service.MultiService.__init__(self)
                self.client = client
                self._umask = umask
                self.upload_dircap = upload_dircap
                self.collective_dircap = collective_dircap
                self.local_dir = local_path_u
                self.dbfile = dbfile
                self.inotify = inotify

            def startService(self):
                self.running = True

            def stopService(self):
                self.running = False

            def ready(self):
                pass

        self.patch(allmydata.frontends.magic_folder, 'MagicFolder', MockMagicFolder)

        upload_dircap = "URI:DIR2:blah"
        local_dir_u = self.unicode_or_fallback(u"loc\u0101l_dir", u"local_dir")
        local_dir_utf8 = local_dir_u.encode('utf-8')
        config = (BASECONFIG +
                  "[storage]\n" +
                  "enabled = false\n" +
                  "[magic_folder]\n" +
                  "enabled = true\n")

        basedir1 = "test_client.Basic.test_create_magic_folder_service1"
        os.mkdir(basedir1)
        os.mkdir(local_dir_u)

        # which config-entry should be missing?
        fileutil.write(os.path.join(basedir1, "tahoe.cfg"),
                       config + "local.directory = " + local_dir_utf8 + "\n")
        with self.assertRaises(IOError):
            yield client.create_client(basedir1)

        # local.directory entry missing .. but that won't be an error
        # now, it'll just assume there are no magic folders
        # .. hrm...should we make that an error (if enabled=true but
        # there's no yaml AND no local.directory?)
        fileutil.write(os.path.join(basedir1, "tahoe.cfg"), config)
        fileutil.write(os.path.join(basedir1, "private", "magic_folder_dircap"), "URI:DIR2:blah")
        fileutil.write(os.path.join(basedir1, "private", "collective_dircap"), "URI:DIR2:meow")

        fileutil.write(os.path.join(basedir1, "tahoe.cfg"),
                       config.replace("[magic_folder]\n", "[drop_upload]\n"))

        with self.assertRaises(OldConfigOptionError):
            yield client.create_client(basedir1)

        fileutil.write(os.path.join(basedir1, "tahoe.cfg"),
                       config + "local.directory = " + local_dir_utf8 + "\n")
        c1 = yield client.create_client(basedir1)
        magicfolder = c1.getServiceNamed('magic-folder')
        self.failUnless(isinstance(magicfolder, MockMagicFolder), magicfolder)
        self.failUnlessReallyEqual(magicfolder.client, c1)
        self.failUnlessReallyEqual(magicfolder.upload_dircap, upload_dircap)
        self.failUnlessReallyEqual(os.path.basename(magicfolder.local_dir), local_dir_u)
        self.failUnless(magicfolder.inotify is None, magicfolder.inotify)
        # It doesn't start until the client starts.
        self.assertFalse(magicfolder.running)

        # See above.
        boom = True

        basedir2 = "test_client.Basic.test_create_magic_folder_service2"
        os.mkdir(basedir2)
        os.mkdir(os.path.join(basedir2, "private"))
        fileutil.write(os.path.join(basedir2, "tahoe.cfg"),
                       BASECONFIG +
                       "[magic_folder]\n" +
                       "enabled = true\n" +
                       "local.directory = " + local_dir_utf8 + "\n")
        fileutil.write(os.path.join(basedir2, "private", "magic_folder_dircap"), "URI:DIR2:blah")
        fileutil.write(os.path.join(basedir2, "private", "collective_dircap"), "URI:DIR2:meow")
        with self.assertRaises(Boom):
            yield client.create_client(basedir2)


def flush_but_dont_ignore(res):
    d = flushEventualQueue()
@@ -1,171 +0,0 @@
# Copyright (c) Twisted Matrix Laboratories.
# See LICENSE for details.

"""
Tests for the inotify-alike implementation L{allmydata.watchdog}.
"""

# Note: See https://twistedmatrix.com/trac/ticket/8915 for a proposal
# to avoid all of this duplicated code from Twisted.

from twisted.internet import defer, reactor
from twisted.python import filepath, runtime

from allmydata.frontends.magic_folder import get_inotify_module
from .common import (
    AsyncTestCase,
    skipIf,
)
inotify = get_inotify_module()


@skipIf(runtime.platformType == "win32", "inotify does not yet work on windows")
class INotifyTests(AsyncTestCase):
    """
    Define all the tests for the basic functionality exposed by
    L{inotify.INotify}.
    """
    def setUp(self):
        self.dirname = filepath.FilePath(self.mktemp())
        self.dirname.createDirectory()
        self.inotify = inotify.INotify()
        self.inotify.startReading()
        self.addCleanup(self.inotify.stopReading)
        return super(INotifyTests, self).setUp()


    def _notificationTest(self, mask, operation, expectedPath=None):
        """
        Test notification from some filesystem operation.

        @param mask: The event mask to use when setting up the watch.

        @param operation: A function which will be called with the
            name of a file in the watched directory and which should
            trigger the event.

        @param expectedPath: Optionally, the name of the path which is
            expected to come back in the notification event; this will
            also be passed to C{operation} (primarily useful when the
            operation is being done to the directory itself, not a
            file in it).

        @return: A L{Deferred} which fires successfully when the
            expected event has been received or fails otherwise.
        """
        if expectedPath is None:
            expectedPath = self.dirname.child("foo.bar")
        notified = defer.Deferred()
        def cbNotified(result):
            (watch, filename, events) = result
            self.assertEqual(filename.asBytesMode(), expectedPath.asBytesMode())
            self.assertTrue(events & mask)
            self.inotify.ignore(self.dirname)
        notified.addCallback(cbNotified)

        def notify_event(*args):
            notified.callback(args)
        self.inotify.watch(
            self.dirname, mask=mask,
            callbacks=[notify_event])
        operation(expectedPath)
        return notified


    def test_modify(self):
        """
        Writing to a file in a monitored directory sends an
        C{inotify.IN_MODIFY} event to the callback.
        """
        def operation(path):
            with path.open("w") as fObj:
                fObj.write(b'foo')

        return self._notificationTest(inotify.IN_MODIFY, operation)


    def test_attrib(self):
        """
        Changing the metadata of a file in a monitored directory
        sends an C{inotify.IN_ATTRIB} event to the callback.
        """
        def operation(path):
            # Create the file.
            path.touch()
            # Modify the file's attributes.
            path.touch()

        return self._notificationTest(inotify.IN_ATTRIB, operation)


    def test_closeWrite(self):
        """
        Closing a file which was open for writing in a monitored
        directory sends an C{inotify.IN_CLOSE_WRITE} event to the
        callback.
        """
        def operation(path):
            path.open("w").close()

        return self._notificationTest(inotify.IN_CLOSE_WRITE, operation)


    def test_delete(self):
        """
        Deleting a file in a monitored directory sends an
        C{inotify.IN_DELETE} event to the callback.
        """
        expectedPath = self.dirname.child("foo.bar")
        expectedPath.touch()
        notified = defer.Deferred()
        def cbNotified(result):
            (watch, filename, events) = result
            self.assertEqual(filename.asBytesMode(), expectedPath.asBytesMode())
            self.assertTrue(events & inotify.IN_DELETE)
        notified.addCallback(cbNotified)
        self.inotify.watch(
            self.dirname, mask=inotify.IN_DELETE,
            callbacks=[lambda *args: notified.callback(args)])
        expectedPath.remove()
        return notified


    def test_humanReadableMask(self):
        """
        L{inotify.humanReadableMask} translates all the possible event masks to a
        human readable string.
        """
        for mask, value in inotify._FLAG_TO_HUMAN:
            self.assertEqual(inotify.humanReadableMask(mask)[0], value)

        checkMask = (
            inotify.IN_CLOSE_WRITE | inotify.IN_ACCESS | inotify.IN_OPEN)
        self.assertEqual(
            set(inotify.humanReadableMask(checkMask)),
            set(['close_write', 'access', 'open']))


    def test_noAutoAddSubdirectory(self):
        """
        L{inotify.INotify.watch} with autoAdd==False will stop inotify
        from watching subdirectories created under the watched one.
        """
        def _callback(wp, fp, mask):
            # We are notified before we actually process new
            # directories, so we need to defer this check.
            def _():
                try:
                    self.assertFalse(self.inotify._isWatched(subdir))
                    d.callback(None)
                except Exception:
                    d.errback()
            reactor.callLater(0, _)

        checkMask = inotify.IN_ISDIR | inotify.IN_CREATE
        self.inotify.watch(
            self.dirname, mask=checkMask, autoAdd=False,
            callbacks=[_callback])
        subdir = self.dirname.child('test')
        d = defer.Deferred()
        subdir.createDirectory()
        return d
File diff suppressed because it is too large
@@ -1,28 +0,0 @@

from twisted.trial import unittest

from allmydata import magicpath


class MagicPath(unittest.TestCase):
    tests = {
        u"Documents/work/critical-project/qed.txt": u"Documents@_work@_critical-project@_qed.txt",
        u"Documents/emails/bunnyfufu@hoppingforest.net": u"Documents@_emails@_bunnyfufu@@hoppingforest.net",
        u"foo/@/bar": u"foo@_@@@_bar",
    }

    def test_path2magic(self):
        for test, expected in self.tests.items():
            self.failUnlessEqual(magicpath.path2magic(test), expected)

    def test_magic2path(self):
        for expected, test in self.tests.items():
            self.failUnlessEqual(magicpath.magic2path(test), expected)

    def test_should_ignore(self):
        self.failUnlessEqual(magicpath.should_ignore_file(u".bashrc"), True)
        self.failUnlessEqual(magicpath.should_ignore_file(u"bashrc."), False)
        self.failUnlessEqual(magicpath.should_ignore_file(u"forest/tree/branch/.bashrc"), True)
        self.failUnlessEqual(magicpath.should_ignore_file(u"forest/tree/.branch/bashrc"), True)
        self.failUnlessEqual(magicpath.should_ignore_file(u"forest/.tree/branch/bashrc"), True)
        self.failUnlessEqual(magicpath.should_ignore_file(u"forest/tree/branch/bashrc"), False)
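The test vectors above pin down the escaping scheme: `/` becomes `@_` and a literal `@` becomes `@@`. A minimal sketch of an encoder and decoder consistent with those vectors (re-implemented here for illustration; it is not taken verbatim from the deleted `magicpath` module):

```python
import re

def path2magic(path):
    # Escape '@' as '@@' and '/' as '@_' in a single left-to-right pass.
    return re.sub(u'[/@]', lambda m: {u'/': u'@_', u'@': u'@@'}[m.group(0)], path)

def magic2path(path):
    # Invert the escaping: '@_' back to '/', '@@' back to '@'.
    return re.sub(u'@[_@]', lambda m: {u'@_': u'/', u'@@': u'@'}[m.group(0)], path)

print(path2magic(u"foo/@/bar"))  # foo@_@@@_bar
```

Because both replacements start with `@` and a raw path never contains a bare `@` after encoding, decoding is unambiguous and the round trip is lossless.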
@@ -15,9 +15,6 @@ from testtools.matchers import (

BLACKLIST = {
    "allmydata.test.check_load",
    "allmydata.watchdog._watchdog_541",
    "allmydata.watchdog.inotify",
    "allmydata.windows.inotify",
    "allmydata.windows.registry",
}

@@ -40,7 +40,7 @@ class TestStreamingLogs(unittest.TestCase):
            messages.append(json.loads(msg))
        proto.on("message", got_message)

        @log_call(action_type=u"test:cli:magic-folder:cleanup")
        @log_call(action_type=u"test:cli:some-exciting-action")
        def do_a_thing():
            pass

@@ -3,7 +3,6 @@ from __future__ import print_function
import os.path, re, urllib, time, cgi
import json
import treq
import mock

from bs4 import BeautifulSoup

@@ -34,7 +33,6 @@ from allmydata.immutable import upload
from allmydata.immutable.downloader.status import DownloadStatus
from allmydata.dirnode import DirectoryNode
from allmydata.nodemaker import NodeMaker
from allmydata.frontends.magic_folder import QueuedItem
from allmydata.web import status
from allmydata.web.common import WebError, MultiFormatPage
from allmydata.util import fileutil, base32, hashutil
@@ -66,7 +64,6 @@ from ..common_web import (
)
from allmydata.client import _Client, SecretHolder
from .common import unknown_rwcap, unknown_rocap, unknown_immcap, FAVICON_MARKUP
from ..status import FakeStatus

# create a fake uploader/downloader, and a couple of fake dirnodes, then
# create a webserver that works against them
@@ -127,29 +124,6 @@ class FakeUploader(service.Service):
        return (self.helper_furl, self.helper_connected)


def create_test_queued_item(relpath_u, history=[]):
    progress = mock.Mock()
    progress.progress = 100.0
    item = QueuedItem(relpath_u, progress, 1234)
    for the_status, timestamp in history:
        item.set_status(the_status, current_time=timestamp)
    return item


class FakeMagicFolder(object):
    def __init__(self):
        self.uploader = FakeStatus()
        self.downloader = FakeStatus()

    def get_public_status(self):
        return (
            True,
            [
                'a magic-folder status message'
            ],
        )


def build_one_ds():
    ds = DownloadStatus("storage_index", 1234)
    now = time.time()
@@ -284,7 +258,6 @@ class FakeClient(_Client):
        # don't upcall to Client.__init__, since we only want to initialize a
        # minimal subset
        service.MultiService.__init__(self)
        self._magic_folders = dict()
        self.all_contents = {}
        self.nodeid = "fake_nodeid"
        self.nickname = u"fake_nickname \u263A"
@@ -941,79 +914,6 @@ class Web(WebMixin, WebErrorMixin, testutil.StallMixin, testutil.ReallyEqualMixi
        d.addCallback(_check)
        return d

    @defer.inlineCallbacks
    def test_magicfolder_status_bad_token(self):
        with self.assertRaises(Error):
            yield self.POST(
                '/magic_folder?t=json',
                t='json',
                name='default',
                token='not the token you are looking for',
            )

    @defer.inlineCallbacks
    def test_magicfolder_status_wrong_folder(self):
        with self.assertRaises(Exception) as ctx:
            yield self.POST(
                '/magic_folder?t=json',
                t='json',
                name='a non-existent magic-folder',
                token=self.s.get_auth_token(),
            )
        self.assertIn(
            "Not Found",
            str(ctx.exception)
        )

    @defer.inlineCallbacks
    def test_magicfolder_status_success(self):
        self.s._magic_folders['default'] = mf = FakeMagicFolder()
        mf.uploader.status = [
            create_test_queued_item(u"rel/uppath", [('done', 12345)])
        ]
        mf.downloader.status = [
            create_test_queued_item(u"rel/downpath", [('done', 23456)])
        ]
        data = yield self.POST(
            '/magic_folder?t=json',
            t='json',
            name='default',
            token=self.s.get_auth_token(),
        )
        data = json.loads(data)
        self.assertEqual(
            data,
            [
                {
                    "status": "done",
                    "path": "rel/uppath",
                    "kind": "upload",
                    "percent_done": 100.0,
                    "done_at": 12345,
                    "size": 1234,
                },
                {
                    "status": "done",
                    "path": "rel/downpath",
                    "kind": "download",
                    "percent_done": 100.0,
                    "done_at": 23456,
                    "size": 1234,
                },
            ]
        )

    @defer.inlineCallbacks
    def test_magicfolder_root_success(self):
        self.s._magic_folders['default'] = mf = FakeMagicFolder()
        mf.uploader.status = [
            create_test_queued_item(u"rel/path", [('done', 12345)])
        ]
        data = yield self.GET(
            '/',
        )
        del data

    def test_status(self):
        h = self.s.get_history()
        dl_num = h.list_all_download_statuses()[0].get_counter()
@@ -16,15 +16,6 @@ __all__ = [
    "opt_help_eliot_destinations",
    "validateInstanceOf",
    "validateSetMembership",
    "MAYBE_NOTIFY",
    "CALLBACK",
    "INOTIFY_EVENTS",
    "RELPATH",
    "VERSION",
    "LAST_UPLOADED_URI",
    "LAST_DOWNLOADED_URI",
    "LAST_DOWNLOADED_TIMESTAMP",
    "PATHINFO",
]

from sys import (
@@ -51,8 +42,6 @@ from attr.validators import (
from eliot import (
    ILogger,
    Message,
    Field,
    ActionType,
    FileDestination,
    add_destinations,
    remove_destination,
@@ -86,14 +75,6 @@ from twisted.internet.defer import (
)
from twisted.application.service import Service


from .fileutil import (
    PathInfo,
)
from .fake_inotify import (
    humanReadableMask,
)

def validateInstanceOf(t):
    """
    Return an Eliot validator that requires values to be instances of ``t``.
@@ -112,72 +93,6 @@ def validateSetMembership(s):
            raise ValidationError("{} not in {}".format(v, s))
    return validator

RELPATH = Field.for_types(
    u"relpath",
    [unicode],
    u"The relative path of a file in a magic-folder.",
)

VERSION = Field.for_types(
    u"version",
    [int, long],
    u"The version of the file.",
)

LAST_UPLOADED_URI = Field.for_types(
    u"last_uploaded_uri",
    [unicode, bytes, None],
    u"The filecap to which this version of this file was uploaded.",
)

LAST_DOWNLOADED_URI = Field.for_types(
    u"last_downloaded_uri",
    [unicode, bytes, None],
    u"The filecap from which the previous version of this file was downloaded.",
)

LAST_DOWNLOADED_TIMESTAMP = Field.for_types(
    u"last_downloaded_timestamp",
    [float, int, long],
    u"(XXX probably not really, don't trust this) The timestamp of the last download of this file.",
)

PATHINFO = Field(
    u"pathinfo",
    lambda v: None if v is None else {
        "isdir": v.isdir,
        "isfile": v.isfile,
        "islink": v.islink,
        "exists": v.exists,
        "size": v.size,
        "mtime_ns": v.mtime_ns,
        "ctime_ns": v.ctime_ns,
    },
    u"The metadata for this version of this file.",
    validateInstanceOf((type(None), PathInfo)),
)

INOTIFY_EVENTS = Field(
    u"inotify_events",
    humanReadableMask,
    u"Details about a filesystem event generating a notification event.",
    validateInstanceOf((int, long)),
)

MAYBE_NOTIFY = ActionType(
    u"filesystem:notification:maybe-notify",
    [],
    [],
    u"A filesystem event is being considered for dispatch to an application handler.",
)

CALLBACK = ActionType(
    u"filesystem:notification:callback",
    [INOTIFY_EVENTS],
    [],
    u"A filesystem event is being dispatched to an application callback."
)

def eliot_logging_service(reactor, destinations):
    """
    Parse the given Eliot destination descriptions and return an ``IService``
@@ -1,109 +0,0 @@

# Most of this is copied from Twisted 11.0. The reason for this hack is that
# twisted.internet.inotify can't be imported when the platform does not support inotify.

import six

if six.PY3:
    long = int

# from /usr/src/linux/include/linux/inotify.h

IN_ACCESS = long(0x00000001)         # File was accessed
IN_MODIFY = long(0x00000002)         # File was modified
IN_ATTRIB = long(0x00000004)         # Metadata changed
IN_CLOSE_WRITE = long(0x00000008)    # Writeable file was closed
IN_CLOSE_NOWRITE = long(0x00000010)  # Unwriteable file closed
IN_OPEN = long(0x00000020)           # File was opened
IN_MOVED_FROM = long(0x00000040)     # File was moved from X
IN_MOVED_TO = long(0x00000080)       # File was moved to Y
IN_CREATE = long(0x00000100)         # Subfile was created
IN_DELETE = long(0x00000200)         # Subfile was deleted
IN_DELETE_SELF = long(0x00000400)    # Self was deleted
IN_MOVE_SELF = long(0x00000800)      # Self was moved
IN_UNMOUNT = long(0x00002000)        # Backing fs was unmounted
IN_Q_OVERFLOW = long(0x00004000)     # Event queue overflowed
IN_IGNORED = long(0x00008000)        # File was ignored

IN_ONLYDIR = 0x01000000              # only watch the path if it is a directory
IN_DONT_FOLLOW = 0x02000000          # don't follow a sym link
IN_MASK_ADD = 0x20000000             # add to the mask of an already existing watch
IN_ISDIR = 0x40000000                # event occurred against dir
IN_ONESHOT = 0x80000000              # only send event once

IN_CLOSE = IN_CLOSE_WRITE | IN_CLOSE_NOWRITE  # closes
IN_MOVED = IN_MOVED_FROM | IN_MOVED_TO        # moves
IN_CHANGED = IN_MODIFY | IN_ATTRIB            # changes

IN_WATCH_MASK = (IN_MODIFY | IN_ATTRIB |
                 IN_CREATE | IN_DELETE |
                 IN_DELETE_SELF | IN_MOVE_SELF |
                 IN_UNMOUNT | IN_MOVED_FROM | IN_MOVED_TO)


_FLAG_TO_HUMAN = [
    (IN_ACCESS, 'access'),
    (IN_MODIFY, 'modify'),
    (IN_ATTRIB, 'attrib'),
    (IN_CLOSE_WRITE, 'close_write'),
    (IN_CLOSE_NOWRITE, 'close_nowrite'),
    (IN_OPEN, 'open'),
    (IN_MOVED_FROM, 'moved_from'),
    (IN_MOVED_TO, 'moved_to'),
    (IN_CREATE, 'create'),
    (IN_DELETE, 'delete'),
    (IN_DELETE_SELF, 'delete_self'),
    (IN_MOVE_SELF, 'move_self'),
    (IN_UNMOUNT, 'unmount'),
    (IN_Q_OVERFLOW, 'queue_overflow'),
    (IN_IGNORED, 'ignored'),
    (IN_ONLYDIR, 'only_dir'),
    (IN_DONT_FOLLOW, 'dont_follow'),
    (IN_MASK_ADD, 'mask_add'),
    (IN_ISDIR, 'is_dir'),
    (IN_ONESHOT, 'one_shot')
]


def humanReadableMask(mask):
    """
    Auxiliary function that converts a hexadecimal mask into a series
    of human readable flags.
    """
    s = []
    for k, v in _FLAG_TO_HUMAN:
        if k & mask:
            s.append(v)
    return s


from eliot import start_action

# This class is not copied from Twisted; it acts as a mock.
class INotify(object):
    def startReading(self):
        pass

    def stopReading(self):
        pass

    def loseConnection(self):
        pass

    def watch(self, filepath, mask=IN_WATCH_MASK, autoAdd=False, callbacks=None, recursive=False):
        self.callbacks = callbacks

    def event(self, filepath, mask):
        with start_action(action_type=u"fake-inotify:event", path=filepath.asTextMode().path, mask=mask):
            for cb in self.callbacks:
                cb(None, filepath, mask)


__all__ = ["INotify", "humanReadableMask", "IN_WATCH_MASK", "IN_ACCESS",
           "IN_MODIFY", "IN_ATTRIB", "IN_CLOSE_NOWRITE", "IN_CLOSE_WRITE",
           "IN_OPEN", "IN_MOVED_FROM", "IN_MOVED_TO", "IN_CREATE",
           "IN_DELETE", "IN_DELETE_SELF", "IN_MOVE_SELF", "IN_UNMOUNT",
           "IN_Q_OVERFLOW", "IN_IGNORED", "IN_ONLYDIR", "IN_DONT_FOLLOW",
           "IN_MASK_ADD", "IN_ISDIR", "IN_ONESHOT", "IN_CLOSE",
           "IN_MOVED", "IN_CHANGED"]
@ -1,16 +0,0 @@
"""
Hotfix for https://github.com/gorakhargosh/watchdog/issues/541
"""

from watchdog.observers.fsevents import FSEventsEmitter

# The class object has already been bundled up in the default arguments to
# FSEventsObserver.__init__. So mutate the class object (instead of replacing
# it with a safer version).
original_on_thread_stop = FSEventsEmitter.on_thread_stop
def safe_on_thread_stop(self):
    if self.is_alive():
        return original_on_thread_stop(self)

def patch():
    FSEventsEmitter.on_thread_stop = safe_on_thread_stop
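The hotfix above is a guard-then-monkey-patch: save the original method, wrap it so it only delegates when the thread is actually alive, then swap the wrapper onto the class. A standalone sketch of the same pattern on a stand-in class (not watchdog's FSEventsEmitter):

```python
# Stand-in class whose stop method misbehaves if the thread never started,
# mimicking the watchdog bug the hotfix above works around.
class Emitter(object):
    def __init__(self):
        self.alive = False

    def is_alive(self):
        return self.alive

    def on_thread_stop(self):
        # Pretend this raises if the thread never started.
        if not self.alive:
            raise RuntimeError("not started")
        self.alive = False

original = Emitter.on_thread_stop

def safe_on_thread_stop(self):
    # Only delegate when the thread is actually running.
    if self.is_alive():
        return original(self)

# Mutate the class object, as the hotfix does.
Emitter.on_thread_stop = safe_on_thread_stop

e = Emitter()
e.on_thread_stop()  # no longer raises when the thread never started
```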
@ -1,212 +0,0 @@

"""
An implementation of an inotify-like interface on top of the ``watchdog`` library.
"""

from __future__ import (
    unicode_literals,
    print_function,
    absolute_import,
    division,
)

__all__ = [
    "humanReadableMask", "INotify",
    "IN_WATCH_MASK", "IN_ACCESS", "IN_MODIFY", "IN_ATTRIB", "IN_CLOSE_NOWRITE",
    "IN_CLOSE_WRITE", "IN_OPEN", "IN_MOVED_FROM", "IN_MOVED_TO", "IN_CREATE",
    "IN_DELETE", "IN_DELETE_SELF", "IN_MOVE_SELF", "IN_UNMOUNT", "IN_ONESHOT",
    "IN_Q_OVERFLOW", "IN_IGNORED", "IN_ONLYDIR", "IN_DONT_FOLLOW", "IN_MOVED",
    "IN_MASK_ADD", "IN_ISDIR", "IN_CLOSE", "IN_CHANGED", "_FLAG_TO_HUMAN",
]

from watchdog.observers import Observer
from watchdog.events import (
    FileSystemEvent,
    FileSystemEventHandler, DirCreatedEvent, FileCreatedEvent,
    DirDeletedEvent, FileDeletedEvent, FileModifiedEvent
)

from twisted.internet import reactor
from twisted.python.filepath import FilePath
from allmydata.util.fileutil import abspath_expanduser_unicode

from eliot import (
    ActionType,
    Message,
    Field,
    preserve_context,
    start_action,
)

from allmydata.util.pollmixin import PollMixin
from allmydata.util.assertutil import _assert, precondition
from allmydata.util import encodingutil
from allmydata.util.fake_inotify import humanReadableMask, \
    IN_WATCH_MASK, IN_ACCESS, IN_MODIFY, IN_ATTRIB, IN_CLOSE_NOWRITE, IN_CLOSE_WRITE, \
    IN_OPEN, IN_MOVED_FROM, IN_MOVED_TO, IN_CREATE, IN_DELETE, IN_DELETE_SELF, \
    IN_MOVE_SELF, IN_UNMOUNT, IN_Q_OVERFLOW, IN_IGNORED, IN_ONLYDIR, IN_DONT_FOLLOW, \
    IN_MASK_ADD, IN_ISDIR, IN_ONESHOT, IN_CLOSE, IN_MOVED, IN_CHANGED, \
    _FLAG_TO_HUMAN

from ..util.eliotutil import (
    MAYBE_NOTIFY,
    CALLBACK,
    validateInstanceOf,
)

from . import _watchdog_541

_watchdog_541.patch()

NOT_STARTED = "NOT_STARTED"
STARTED = "STARTED"
STOPPING = "STOPPING"
STOPPED = "STOPPED"

_PATH = Field.for_types(
    u"path",
    [bytes, unicode],
    u"The path an inotify event concerns.",
)

_EVENT = Field(
    u"event",
    lambda e: e.__class__.__name__,
    u"The watchdog event that has taken place.",
    validateInstanceOf(FileSystemEvent),
)

ANY_INOTIFY_EVENT = ActionType(
    u"watchdog:inotify:any-event",
    [_PATH, _EVENT],
    [],
    u"An inotify event is being dispatched.",
)

class INotifyEventHandler(FileSystemEventHandler):
    def __init__(self, path, mask, callbacks, pending_delay):
        FileSystemEventHandler.__init__(self)
        self._path = path
        self._mask = mask
        self._callbacks = callbacks
        self._pending_delay = pending_delay
        self._pending = set()

    def _maybe_notify(self, path, event):
        with MAYBE_NOTIFY():
            event_mask = IN_CHANGED
            if isinstance(event, FileModifiedEvent):
                event_mask = event_mask | IN_CLOSE_WRITE
                event_mask = event_mask | IN_MODIFY
            if isinstance(event, (DirCreatedEvent, FileCreatedEvent)):
                # For our purposes, IN_CREATE is irrelevant.
                event_mask = event_mask | IN_CLOSE_WRITE
            if isinstance(event, (DirDeletedEvent, FileDeletedEvent)):
                event_mask = event_mask | IN_DELETE
            if event.is_directory:
                event_mask = event_mask | IN_ISDIR
            if not (self._mask & event_mask):
                return
            for cb in self._callbacks:
                try:
                    with CALLBACK(inotify_events=event_mask):
                        cb(None, FilePath(path), event_mask)
                except:
                    # Eliot already logged the exception for us.
                    # There's nothing else we can do about it here.
                    pass

    def process(self, event):
        event_filepath_u = event.src_path.decode(encodingutil.get_filesystem_encoding())
        event_filepath_u = abspath_expanduser_unicode(event_filepath_u, base=self._path)

        if event_filepath_u == self._path:
            # ignore events for parent directory
            return

        self._maybe_notify(event_filepath_u, event)

    def on_any_event(self, event):
        with ANY_INOTIFY_EVENT(path=event.src_path, event=event):
            reactor.callFromThread(
                preserve_context(self.process),
                event,
            )
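The `_maybe_notify` method above translates watchdog event classes into inotify mask bits. A standalone sketch of that translation, using stand-in event classes and inlined flag values (assumptions for illustration, not imports from this module):

```python
# Inotify-style flag values, inlined so the sketch runs without watchdog.
IN_MODIFY = 0x00000002
IN_ATTRIB = 0x00000004
IN_CLOSE_WRITE = 0x00000008
IN_DELETE = 0x00000200
IN_ISDIR = 0x40000000
IN_CHANGED = IN_MODIFY | IN_ATTRIB

# Stand-ins for watchdog's event classes.
class FileModifiedEvent(object):
    is_directory = False

class FileCreatedEvent(object):
    is_directory = False

class DirDeletedEvent(object):
    is_directory = True

def event_to_mask(event):
    # Every event starts as a generic "change"; specific event types then
    # add the more precise inotify bits, as _maybe_notify does above.
    mask = IN_CHANGED
    if isinstance(event, FileModifiedEvent):
        mask |= IN_CLOSE_WRITE | IN_MODIFY
    if isinstance(event, FileCreatedEvent):
        mask |= IN_CLOSE_WRITE
    if isinstance(event, DirDeletedEvent):
        mask |= IN_DELETE
    if event.is_directory:
        mask |= IN_ISDIR
    return mask
```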


class INotify(PollMixin):
    """
    I am a prototype INotify, made to work on Mac OS X (Darwin)
    using the Watchdog python library. This is actually a simplified subset
    of the twisted Linux INotify class because we do not utilize the watch mask
    and only implement the following methods:
     - watch
     - startReading
     - stopReading
     - wait_until_stopped
     - set_pending_delay
    """
    def __init__(self):
        self._pending_delay = 1.0
        self.recursive_includes_new_subdirectories = False
        self._callbacks = {}
        self._watches = {}
        self._state = NOT_STARTED
        self._observer = Observer(timeout=self._pending_delay)

    def set_pending_delay(self, delay):
        Message.log(message_type=u"watchdog:inotify:set-pending-delay", delay=delay)
        assert self._state != STARTED
        self._pending_delay = delay

    def startReading(self):
        with start_action(action_type=u"watchdog:inotify:start-reading"):
            assert self._state != STARTED
            try:
                # XXX twisted.internet.inotify doesn't require watches to
                # be set before startReading is called.
                # _assert(len(self._callbacks) != 0, "no watch set")
                self._observer.start()
                self._state = STARTED
            except:
                self._state = STOPPED
                raise

    def stopReading(self):
        with start_action(action_type=u"watchdog:inotify:stop-reading"):
            if self._state != STOPPED:
                self._state = STOPPING
            self._observer.unschedule_all()
            self._observer.stop()
            self._observer.join()
            self._state = STOPPED

    def wait_until_stopped(self):
        return self.poll(lambda: self._state == STOPPED)

    def _isWatched(self, path_u):
        return path_u in self._callbacks.keys()

    def ignore(self, path):
        path_u = path.path
        self._observer.unschedule(self._watches[path_u])
        del self._callbacks[path_u]
        del self._watches[path_u]

    def watch(self, path, mask=IN_WATCH_MASK, autoAdd=False, callbacks=None, recursive=False):
        precondition(isinstance(autoAdd, bool), autoAdd=autoAdd)
        precondition(isinstance(recursive, bool), recursive=recursive)
        assert autoAdd == False

        path_u = path.path
        if not isinstance(path_u, unicode):
            path_u = path_u.decode('utf-8')
            _assert(isinstance(path_u, unicode), path_u=path_u)

        if path_u not in self._callbacks.keys():
            self._callbacks[path_u] = callbacks or []
            self._watches[path_u] = self._observer.schedule(
                INotifyEventHandler(path_u, mask, self._callbacks[path_u], self._pending_delay),
                path=path_u,
                recursive=False,
            )
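`INotify.watch` above registers each path at most once, keeping one callback list and one observer handle per watched path. A standalone sketch of that bookkeeping, with a trivial stand-in observer rather than watchdog's:

```python
# Trivial stand-in for watchdog's Observer: it only records what was scheduled.
class FakeObserver(object):
    def __init__(self):
        self.scheduled = []

    def schedule(self, handler, path, recursive):
        self.scheduled.append(path)
        return ("watch", path)

class WatchRegistry(object):
    def __init__(self, observer):
        self._observer = observer
        self._callbacks = {}
        self._watches = {}

    def watch(self, path_u, callbacks=None):
        # Register each path only once, mirroring INotify.watch above.
        if path_u not in self._callbacks:
            self._callbacks[path_u] = callbacks or []
            self._watches[path_u] = self._observer.schedule(
                None, path=path_u, recursive=False)

    def ignore(self, path_u):
        # Forget both the callback list and the observer handle.
        del self._callbacks[path_u]
        del self._watches[path_u]
```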
@ -1,52 +0,0 @@
import json

from allmydata.web.common import TokenOnlyWebApi, get_arg, WebError


class MagicFolderWebApi(TokenOnlyWebApi):
    """
    I provide the web-based API for Magic Folder status etc.
    """

    def __init__(self, client):
        TokenOnlyWebApi.__init__(self, client)
        self.client = client

    def post_json(self, req):
        req.setHeader("content-type", "application/json")
        nick = get_arg(req, 'name', 'default')

        try:
            magic_folder = self.client._magic_folders[nick]
        except KeyError:
            raise WebError(
                "No such magic-folder '{}'".format(nick),
                404,
            )

        data = []
        for item in magic_folder.uploader.get_status():
            d = dict(
                path=item.relpath_u,
                status=item.status_history()[-1][0],
                kind='upload',
            )
            for (status, ts) in item.status_history():
                d[status + '_at'] = ts
            d['percent_done'] = item.progress.progress
            d['size'] = item.size
            data.append(d)

        for item in magic_folder.downloader.get_status():
            d = dict(
                path=item.relpath_u,
                status=item.status_history()[-1][0],
                kind='download',
            )
            for (status, ts) in item.status_history():
                d[status + '_at'] = ts
            d['percent_done'] = item.progress.progress
            d['size'] = item.size
            data.append(d)

        return json.dumps(data)
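`post_json` above flattens each uploader/downloader status item into a JSON-friendly dict, turning every `(status, timestamp)` pair in the history into a `<status>_at` key. A standalone sketch with a hypothetical `StatusItem` stand-in (progress tracking omitted for brevity):

```python
import json

# Hypothetical stand-in for the real uploader/downloader status objects.
class StatusItem(object):
    def __init__(self, relpath_u, history, size):
        self.relpath_u = relpath_u
        self._history = history  # list of (status, timestamp) pairs
        self.size = size

    def status_history(self):
        return self._history

def item_to_dict(item, kind):
    d = dict(
        path=item.relpath_u,
        status=item.status_history()[-1][0],  # most recent status
        kind=kind,
    )
    # Each historical status becomes a "<status>_at" timestamp key.
    for (status, ts) in item.status_history():
        d[status + '_at'] = ts
    d['size'] = item.size
    return d

item = StatusItem(u"a/b.txt", [("queued", 1.0), ("success", 2.0)], 1234)
print(json.dumps([item_to_dict(item, "upload")]))
```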
@ -21,7 +21,7 @@ from allmydata.version_checks import get_package_versions_string
from allmydata.util import log
from allmydata.interfaces import IFileNode
from allmydata.web import filenode, directory, unlinked, status
from allmydata.web import storage, magic_folder
from allmydata.web import storage
from allmydata.web.common import (
    abbreviate_size,
    getxmlfile,
@ -208,9 +208,6 @@ class Root(MultiFormatPage):
        self.putChild("uri", URIHandler(client))
        self.putChild("cap", URIHandler(client))

        # handler for "/magic_folder" URIs
        self.putChild("magic_folder", magic_folder.MagicFolderWebApi(client))

        # Handler for everything beneath "/private", an area of the resource
        # hierarchy which is only accessible with the private per-node API
        # auth token.
@ -307,30 +304,6 @@ class Root(MultiFormatPage):
        return description


    def data_magic_folders(self, ctx, data):
        return self.client._magic_folders.keys()

    def render_magic_folder_row(self, ctx, data):
        magic_folder = self.client._magic_folders[data]
        (ok, messages) = magic_folder.get_public_status()
        ctx.fillSlots("magic_folder_name", data)
        if ok:
            ctx.fillSlots("magic_folder_status", "yes")
            ctx.fillSlots("magic_folder_status_alt", "working")
        else:
            ctx.fillSlots("magic_folder_status", "no")
            ctx.fillSlots("magic_folder_status_alt", "not working")

        status = T.ul(class_="magic-folder-status")
        for msg in messages:
            status[T.li[str(msg)]]
        return ctx.tag[status]

    def render_magic_folder(self, ctx, data):
        if not self.client._magic_folders:
            return T.p()
        return ctx.tag

    def render_services(self, ctx, data):
        ul = T.ul()
        try:
@ -53,11 +53,6 @@ body {
.connection-status {
}

.magic-folder-status {
  clear: left;
  margin-left: 40px; /* width of status-indicator + margins */
}

.furl {
  font-size: 0.8em;
  word-wrap: break-word;
@ -1,39 +1,27 @@
<html xmlns:n="http://nevow.com/ns/nevow/0.1">
<html xmlns:t="http://twistedmatrix.com/ns/twisted.web.template/0.1">
  <head>
    <title>Tahoe-LAFS - Operational Statistics</title>
    <link href="/tahoe.css" rel="stylesheet" type="text/css"/>
    <link href="/icon.png" rel="shortcut icon" />
    <meta http-equiv="Content-Type" content="text/html; charset=utf-8" />
  </head>
  <body n:data="get_stats">
  <body>

    <h1>Operational Statistics</h1>

    <h2>General</h2>

    <ul>
      <li>Load Average: <span n:render="load_average" /></li>
      <li>Peak Load: <span n:render="peak_load" /></li>
      <li>Files Uploaded (immutable): <span n:render="uploads" /></li>
      <li>Files Downloaded (immutable): <span n:render="downloads" /></li>
      <li>Files Published (mutable): <span n:render="publishes" /></li>
      <li>Files Retrieved (mutable): <span n:render="retrieves" /></li>
    </ul>

    <h2>Magic Folder</h2>

    <ul>
      <li>Local Directories Monitored: <span n:render="magic_uploader_monitored" /></li>
      <li>Files Uploaded: <span n:render="magic_uploader_succeeded" /></li>
      <li>Files Queued for Upload: <span n:render="magic_uploader_queued" /></li>
      <li>Failed Uploads: <span n:render="magic_uploader_failed" /></li>
      <li>Files Downloaded: <span n:render="magic_downloader_succeeded" /></li>
      <li>Files Queued for Download: <span n:render="magic_downloader_queued" /></li>
      <li>Failed Downloads: <span n:render="magic_downloader_failed" /></li>
    </ul>
    <ul>
      <li>Load Average: <t:transparent t:render="load_average" /></li>
      <li>Peak Load: <t:transparent t:render="peak_load" /></li>
      <li>Files Uploaded (immutable): <t:transparent t:render="uploads" /></li>
      <li>Files Downloaded (immutable): <t:transparent t:render="downloads" /></li>
      <li>Files Published (mutable): <t:transparent t:render="publishes" /></li>
      <li>Files Retrieved (mutable): <t:transparent t:render="retrieves" /></li>
    </ul>

    <h2>Raw Stats:</h2>
    <pre n:render="raw" />
    <pre t:render="raw" />

    <div>Return to the <a href="/">Welcome Page</a></div>

@ -2,7 +2,14 @@
import pprint, itertools, hashlib
import json
from twisted.internet import defer
from twisted.python.filepath import FilePath
from twisted.web.resource import Resource
from twisted.web.template import (
    Element,
    XMLFile,
    renderer,
    renderElement,
)
from nevow import rend, tags as T
from allmydata.util import base32, idlib
from allmydata.web.common import (
@ -14,6 +21,7 @@ from allmydata.web.common import (
    compute_rate,
    render_time,
    MultiFormatPage,
    MultiFormatResource,
)
from allmydata.interfaces import IUploadStatus, IDownloadStatus, \
    IPublishStatus, IRetrieveStatus, IServermapUpdaterStatus
@ -1164,82 +1172,93 @@ class HelperStatus(MultiFormatPage):
    def render_upload_bytes_encoded(self, ctx, data):
        return str(data["chk_upload_helper.encoded_bytes"])

# Render "/statistics" page.
class Statistics(MultiFormatResource):
    """Class that renders "/statistics" page.

class Statistics(MultiFormatPage):
    docFactory = getxmlfile("statistics.xhtml")
    :param _allmydata.stats.StatsProvider provider: node statistics
        provider.
    """

    def __init__(self, provider):
        rend.Page.__init__(self, provider)
        self.provider = provider
        super(Statistics, self).__init__()
        self._provider = provider

    def render_HTML(self, req):
        return renderElement(req, StatisticsElement(self._provider))

    def render_JSON(self, req):
        stats = self.provider.get_stats()
        stats = self._provider.get_stats()
        req.setHeader("content-type", "text/plain")
        return json.dumps(stats, indent=1) + "\n"

    def data_get_stats(self, ctx, data):
        return self.provider.get_stats()
class StatisticsElement(Element):

    def render_load_average(self, ctx, data):
        return str(data["stats"].get("load_monitor.avg_load"))
    loader = XMLFile(FilePath(__file__).sibling("statistics.xhtml"))

    def render_peak_load(self, ctx, data):
        return str(data["stats"].get("load_monitor.max_load"))
    def __init__(self, provider):
        super(StatisticsElement, self).__init__()
        # provider.get_stats() returns a dict of the below form, for
        # example (there's often more data than this):
        #
        # {
        #     'stats': {
        #         'storage_server.disk_used': 809601609728,
        #         'storage_server.accepting_immutable_shares': 1,
        #         'storage_server.disk_free_for_root': 131486851072,
        #         'storage_server.reserved_space': 1000000000,
        #         'node.uptime': 0.16520118713378906,
        #         'storage_server.disk_total': 941088460800,
        #         'cpu_monitor.total': 0.004513999999999907,
        #         'storage_server.disk_avail': 82610759168,
        #         'storage_server.allocated': 0,
        #         'storage_server.disk_free_for_nonroot': 83610759168 },
        #     'counters': {
        #         'uploader.files_uploaded': 0,
        #         'uploader.bytes_uploaded': 0,
        #         ... }
        # }
        #
        # Note that `counters` can be empty.
        self._stats = provider.get_stats()

    def render_uploads(self, ctx, data):
        files = data["counters"].get("uploader.files_uploaded", 0)
        bytes = data["counters"].get("uploader.bytes_uploaded", 0)
        return ("%s files / %s bytes (%s)" %
                (files, bytes, abbreviate_size(bytes)))
    @renderer
    def load_average(self, req, tag):
        return tag(str(self._stats["stats"].get("load_monitor.avg_load")))

    def render_downloads(self, ctx, data):
        files = data["counters"].get("downloader.files_downloaded", 0)
        bytes = data["counters"].get("downloader.bytes_downloaded", 0)
        return ("%s files / %s bytes (%s)" %
                (files, bytes, abbreviate_size(bytes)))
    @renderer
    def peak_load(self, req, tag):
        return tag(str(self._stats["stats"].get("load_monitor.max_load")))

    def render_publishes(self, ctx, data):
        files = data["counters"].get("mutable.files_published", 0)
        bytes = data["counters"].get("mutable.bytes_published", 0)
        return "%s files / %s bytes (%s)" % (files, bytes,
                                             abbreviate_size(bytes))
    @renderer
    def uploads(self, req, tag):
        files = self._stats["counters"].get("uploader.files_uploaded", 0)
        bytes = self._stats["counters"].get("uploader.bytes_uploaded", 0)
        return tag(("%s files / %s bytes (%s)" %
                    (files, bytes, abbreviate_size(bytes))))

    def render_retrieves(self, ctx, data):
        files = data["counters"].get("mutable.files_retrieved", 0)
        bytes = data["counters"].get("mutable.bytes_retrieved", 0)
        return "%s files / %s bytes (%s)" % (files, bytes,
                                             abbreviate_size(bytes))
    @renderer
    def downloads(self, req, tag):
        files = self._stats["counters"].get("downloader.files_downloaded", 0)
        bytes = self._stats["counters"].get("downloader.bytes_downloaded", 0)
        return tag("%s files / %s bytes (%s)" %
                   (files, bytes, abbreviate_size(bytes)))

    def render_magic_uploader_monitored(self, ctx, data):
        dirs = data["counters"].get("magic_folder.uploader.dirs_monitored", 0)
        return "%s directories" % (dirs,)
    @renderer
    def publishes(self, req, tag):
        files = self._stats["counters"].get("mutable.files_published", 0)
        bytes = self._stats["counters"].get("mutable.bytes_published", 0)
        return tag("%s files / %s bytes (%s)" % (files, bytes,
                                                 abbreviate_size(bytes)))

    def render_magic_uploader_succeeded(self, ctx, data):
        # TODO: bytes uploaded
        files = data["counters"].get("magic_folder.uploader.objects_succeeded", 0)
        return "%s files" % (files,)
    @renderer
    def retrieves(self, req, tag):
        files = self._stats["counters"].get("mutable.files_retrieved", 0)
        bytes = self._stats["counters"].get("mutable.bytes_retrieved", 0)
        return tag("%s files / %s bytes (%s)" % (files, bytes,
                                                 abbreviate_size(bytes)))

    def render_magic_uploader_queued(self, ctx, data):
        files = data["counters"].get("magic_folder.uploader.objects_queued", 0)
        return "%s files" % (files,)

    def render_magic_uploader_failed(self, ctx, data):
        files = data["counters"].get("magic_folder.uploader.objects_failed", 0)
        return "%s files" % (files,)

    def render_magic_downloader_succeeded(self, ctx, data):
        # TODO: bytes uploaded
        files = data["counters"].get("magic_folder.downloader.objects_succeeded", 0)
        return "%s files" % (files,)

    def render_magic_downloader_queued(self, ctx, data):
        files = data["counters"].get("magic_folder.downloader.objects_queued", 0)
        return "%s files" % (files,)

    def render_magic_downloader_failed(self, ctx, data):
        files = data["counters"].get("magic_folder.downloader.objects_failed", 0)
        return "%s files" % (files,)

    def render_raw(self, ctx, data):
        raw = pprint.pformat(data)
        return ctx.tag[raw]
    @renderer
    def raw(self, req, tag):
        raw = pprint.pformat(self._stats)
        return tag(raw)
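The counter renderers above all share the `"%s files / %s bytes (%s)"` shape, with missing counters defaulting to 0. A standalone sketch of that formatting; `abbreviate` here is a hypothetical stand-in for `allmydata.web.common.abbreviate_size`, whose exact output format may differ:

```python
def abbreviate(size):
    # Scale a byte count to a short human-readable string (hypothetical
    # stand-in for abbreviate_size; the real helper formats differently).
    for unit, factor in [("GB", 1e9), ("MB", 1e6), ("kB", 1e3)]:
        if size >= factor:
            return "%.2f%s" % (size / factor, unit)
    return "%dB" % (size,)

def format_counter(counters, files_key, bytes_key):
    # Missing counters default to 0, as in the renderers above.
    files = counters.get(files_key, 0)
    nbytes = counters.get(bytes_key, 0)
    return "%s files / %s bytes (%s)" % (files, nbytes, abbreviate(nbytes))

counters = {"uploader.files_uploaded": 3, "uploader.bytes_uploaded": 2500000}
print(format_counter(counters, "uploader.files_uploaded", "uploader.bytes_uploaded"))
# 3 files / 2500000 bytes (2.50MB)
```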
@ -159,13 +159,6 @@
    </div><!--/row-->
  </div>

  <div n:render="magic_folder" class="row-fluid">
    <h2>Magic Folders</h2>
    <div n:render="sequence" n:data="magic_folders">
      <div n:pattern="item" n:render="magic_folder_row"><div class="status-indicator"><img><n:attr name="src">img/connected-<n:slot name="magic_folder_status" />.png</n:attr><n:attr name="alt"><n:slot name="magic_folder_status_alt" /></n:attr></img></div><h3><n:slot name="magic_folder_name" /></h3></div>
    </div>
  </div><!--/row-->

  <div class="row-fluid">
    <h2>
      Connected to <span n:render="string" n:data="connected_storage_servers" />
@ -1,379 +0,0 @@

# Windows near-equivalent to twisted.internet.inotify
# This should only be imported on Windows.

from __future__ import print_function

import six
import os, sys

from eliot import (
    start_action,
    Message,
    log_call,
)

from twisted.internet import reactor
from twisted.internet.threads import deferToThread

from allmydata.util.fake_inotify import humanReadableMask, \
    IN_WATCH_MASK, IN_ACCESS, IN_MODIFY, IN_ATTRIB, IN_CLOSE_NOWRITE, IN_CLOSE_WRITE, \
    IN_OPEN, IN_MOVED_FROM, IN_MOVED_TO, IN_CREATE, IN_DELETE, IN_DELETE_SELF, \
    IN_MOVE_SELF, IN_UNMOUNT, IN_Q_OVERFLOW, IN_IGNORED, IN_ONLYDIR, IN_DONT_FOLLOW, \
    IN_MASK_ADD, IN_ISDIR, IN_ONESHOT, IN_CLOSE, IN_MOVED, IN_CHANGED
[humanReadableMask, \
    IN_WATCH_MASK, IN_ACCESS, IN_MODIFY, IN_ATTRIB, IN_CLOSE_NOWRITE, IN_CLOSE_WRITE, \
    IN_OPEN, IN_MOVED_FROM, IN_MOVED_TO, IN_CREATE, IN_DELETE, IN_DELETE_SELF, \
    IN_MOVE_SELF, IN_UNMOUNT, IN_Q_OVERFLOW, IN_IGNORED, IN_ONLYDIR, IN_DONT_FOLLOW, \
    IN_MASK_ADD, IN_ISDIR, IN_ONESHOT, IN_CLOSE, IN_MOVED, IN_CHANGED]

from allmydata.util.assertutil import _assert, precondition
from allmydata.util.encodingutil import quote_output
from allmydata.util import log, fileutil
from allmydata.util.pollmixin import PollMixin
from ..util.eliotutil import (
    MAYBE_NOTIFY,
    CALLBACK,
)

from ctypes import WINFUNCTYPE, WinError, windll, POINTER, byref, create_string_buffer, \
    addressof, get_last_error
from ctypes.wintypes import BOOL, HANDLE, DWORD, LPCWSTR, LPVOID

if six.PY3:
    long = int

# <http://msdn.microsoft.com/en-us/library/gg258116%28v=vs.85%29.aspx>
FILE_LIST_DIRECTORY = 1

# <http://msdn.microsoft.com/en-us/library/aa363858%28v=vs.85%29.aspx>
CreateFileW = WINFUNCTYPE(
    HANDLE, LPCWSTR, DWORD, DWORD, LPVOID, DWORD, DWORD, HANDLE,
    use_last_error=True
)(("CreateFileW", windll.kernel32))

FILE_SHARE_READ = 0x00000001
FILE_SHARE_WRITE = 0x00000002
FILE_SHARE_DELETE = 0x00000004

OPEN_EXISTING = 3

FILE_FLAG_BACKUP_SEMANTICS = 0x02000000

# <http://msdn.microsoft.com/en-us/library/ms724211%28v=vs.85%29.aspx>
CloseHandle = WINFUNCTYPE(
    BOOL, HANDLE,
    use_last_error=True
)(("CloseHandle", windll.kernel32))

# <http://msdn.microsoft.com/en-us/library/aa365465%28v=vs.85%29.aspx>
ReadDirectoryChangesW = WINFUNCTYPE(
    BOOL, HANDLE, LPVOID, DWORD, BOOL, DWORD, POINTER(DWORD), LPVOID, LPVOID,
    use_last_error=True
)(("ReadDirectoryChangesW", windll.kernel32))

FILE_NOTIFY_CHANGE_FILE_NAME = 0x00000001
FILE_NOTIFY_CHANGE_DIR_NAME = 0x00000002
FILE_NOTIFY_CHANGE_ATTRIBUTES = 0x00000004
#FILE_NOTIFY_CHANGE_SIZE = 0x00000008
FILE_NOTIFY_CHANGE_LAST_WRITE = 0x00000010
FILE_NOTIFY_CHANGE_LAST_ACCESS = 0x00000020
#FILE_NOTIFY_CHANGE_CREATION = 0x00000040
FILE_NOTIFY_CHANGE_SECURITY = 0x00000100

# <http://msdn.microsoft.com/en-us/library/aa364391%28v=vs.85%29.aspx>
FILE_ACTION_ADDED = 0x00000001
FILE_ACTION_REMOVED = 0x00000002
FILE_ACTION_MODIFIED = 0x00000003
FILE_ACTION_RENAMED_OLD_NAME = 0x00000004
FILE_ACTION_RENAMED_NEW_NAME = 0x00000005

_action_to_string = {
    FILE_ACTION_ADDED : "FILE_ACTION_ADDED",
    FILE_ACTION_REMOVED : "FILE_ACTION_REMOVED",
    FILE_ACTION_MODIFIED : "FILE_ACTION_MODIFIED",
    FILE_ACTION_RENAMED_OLD_NAME : "FILE_ACTION_RENAMED_OLD_NAME",
    FILE_ACTION_RENAMED_NEW_NAME : "FILE_ACTION_RENAMED_NEW_NAME",
}

_action_to_inotify_mask = {
    FILE_ACTION_ADDED : IN_CREATE,
    FILE_ACTION_REMOVED : IN_DELETE,
    FILE_ACTION_MODIFIED : IN_CHANGED,
    FILE_ACTION_RENAMED_OLD_NAME : IN_MOVED_FROM,
    FILE_ACTION_RENAMED_NEW_NAME : IN_MOVED_TO,
}
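The table above maps Win32 `FILE_ACTION_*` codes onto inotify mask bits. A runnable sketch with the flag values inlined; the fallback to `IN_CHANGED` for unrecognized action codes is an assumption for illustration, not necessarily what the surrounding module does:

```python
# Win32 action codes and inotify-style mask values, inlined so this runs
# on any platform.
FILE_ACTION_ADDED = 0x1
FILE_ACTION_REMOVED = 0x2
FILE_ACTION_MODIFIED = 0x3

IN_CREATE = 0x00000100
IN_DELETE = 0x00000200
IN_CHANGED = 0x00000002 | 0x00000004  # IN_MODIFY | IN_ATTRIB

_action_to_inotify_mask = {
    FILE_ACTION_ADDED: IN_CREATE,
    FILE_ACTION_REMOVED: IN_DELETE,
    FILE_ACTION_MODIFIED: IN_CHANGED,
}

def to_mask(action):
    # Unknown actions fall back to the generic "changed" mask (an
    # illustrative choice, not taken from the module above).
    return _action_to_inotify_mask.get(action, IN_CHANGED)
```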

INVALID_HANDLE_VALUE = 0xFFFFFFFF

FALSE = 0
TRUE = 1

class Event(object):
    """
    * action: a FILE_ACTION_* constant (not a bit mask)
    * filename: a Unicode string, giving the name relative to the watched directory
    """
    def __init__(self, action, filename):
        self.action = action
        self.filename = filename

    def __repr__(self):
        return "Event(%r, %r)" % (_action_to_string.get(self.action, self.action), self.filename)


class FileNotifyInformation(object):
    """
    I represent a buffer containing FILE_NOTIFY_INFORMATION structures, and can
    iterate over those structures, decoding them into Event objects.
    """

    def __init__(self, size=1024):
        self.size = size
        self.buffer = create_string_buffer(size)
        address = addressof(self.buffer)
        _assert(address & 3 == 0, "address 0x%X returned by create_string_buffer is not DWORD-aligned" % (address,))
        self.data = None

    def read_changes(self, hDirectory, recursive, filter):
        bytes_returned = DWORD(0)
        r = ReadDirectoryChangesW(hDirectory,
                                  self.buffer,
                                  self.size,
                                  recursive,
                                  filter,
                                  byref(bytes_returned),
                                  None, # NULL -> no overlapped I/O
                                  None  # NULL -> no completion routine
                                  )
        if r == 0:
            self.data = None
            raise WinError(get_last_error())
        self.data = self.buffer.raw[:bytes_returned.value]

    def __iter__(self):
        # Iterator implemented as generator: <http://docs.python.org/library/stdtypes.html#generator-types>
        if self.data is None:
            return
        pos = 0
        while True:
            bytes = self._read_dword(pos+8)
            s = Event(self._read_dword(pos+4),
                      self.data[pos+12 : pos+12+bytes].decode('utf-16-le'))
            Message.log(message_type="fni", info=repr(s))

            next_entry_offset = self._read_dword(pos)
            yield s
            if next_entry_offset == 0:
                break
            pos = pos + next_entry_offset

    def _read_dword(self, i):
        # little-endian
        return ( ord(self.data[i])          |
                (ord(self.data[i+1]) <<  8) |
                (ord(self.data[i+2]) << 16) |
                (ord(self.data[i+3]) << 24))
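`__iter__` and `_read_dword` above walk a buffer of `FILE_NOTIFY_INFORMATION` records: three little-endian DWORDs (NextEntryOffset, Action, FileNameLength) followed by a UTF-16-LE file name. The same parse can be sketched with the standard `struct` module:

```python
import struct

def parse_entries(data):
    # Walk FILE_NOTIFY_INFORMATION records: <III> is NextEntryOffset,
    # Action, FileNameLength, each a little-endian DWORD.
    events = []
    pos = 0
    while True:
        next_offset, action, name_len = struct.unpack_from("<III", data, pos)
        name = data[pos + 12 : pos + 12 + name_len].decode("utf-16-le")
        events.append((action, name))
        if next_offset == 0:
            break
        pos += next_offset
    return events

# One entry: NextEntryOffset=0 (last entry), Action=1 (FILE_ACTION_ADDED),
# file name "a.txt".
name = u"a.txt".encode("utf-16-le")
buf = struct.pack("<III", 0, 1, len(name)) + name
print(parse_entries(buf))  # [(1, 'a.txt')]
```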
|
||||
|
||||
def _open_directory(path_u):
|
||||
hDirectory = CreateFileW(path_u,
|
||||
FILE_LIST_DIRECTORY, # access rights
|
||||
FILE_SHARE_READ | FILE_SHARE_WRITE | FILE_SHARE_DELETE,
|
||||
# don't prevent other processes from accessing
|
||||
None, # no security descriptor
|
||||
OPEN_EXISTING, # directory must already exist
|
||||
FILE_FLAG_BACKUP_SEMANTICS, # necessary to open a directory
|
||||
None # no template file
|
||||
)
|
||||
if hDirectory == INVALID_HANDLE_VALUE:
|
||||
e = WinError(get_last_error())
|
||||
raise OSError("Opening directory %s gave WinError: %s" % (quote_output(path_u), e))
|
||||
return hDirectory
|
||||
|
||||
|
||||
def simple_test():
|
||||
path_u = u"test"
|
||||
filter = FILE_NOTIFY_CHANGE_FILE_NAME | FILE_NOTIFY_CHANGE_DIR_NAME | FILE_NOTIFY_CHANGE_LAST_WRITE
|
||||
recursive = TRUE
|
||||
|
||||
hDirectory = _open_directory(path_u)
|
||||
fni = FileNotifyInformation()
|
||||
print("Waiting...")
|
||||
while True:
|
||||
fni.read_changes(hDirectory, recursive, filter)
|
||||
print(repr(fni.data))
|
||||
for info in fni:
|
||||
print(info)
|
||||
|
||||
def medium_test():
|
||||
from twisted.python.filepath import FilePath
|
||||
|
||||
def print_(*event):
|
||||
print(event)
|
||||
|
||||
notifier = INotify()
|
||||
notifier.set_pending_delay(1.0)
|
||||
IN_EXCL_UNLINK = long(0x04000000)
|
||||
mask = ( IN_CREATE
|
||||
| IN_CLOSE_WRITE
|
||||
| IN_MOVED_TO
|
||||
| IN_MOVED_FROM
|
||||
| IN_DELETE
|
||||
| IN_ONLYDIR
|
||||
| IN_EXCL_UNLINK
|
||||
)
|
||||
notifier.watch(FilePath(u"test"), mask, callbacks=[print_], recursive=True)
|
||||
notifier.startReading()
|
||||
reactor.run()
|
||||
|
||||
|
||||
NOT_STARTED = "NOT_STARTED"
|
||||
STARTED = "STARTED"
|
||||
STOPPING = "STOPPING"
|
||||
STOPPED = "STOPPED"
|
||||
|
||||
class INotify(PollMixin):
    def __init__(self):
        self._state = NOT_STARTED
        self._filter = None
        self._callbacks = None
        self._hDirectory = None
        self._path = None
        self._pending = set()
        self._pending_delay = 1.0
        self._pending_call = None
        self.recursive_includes_new_subdirectories = True

    def set_pending_delay(self, delay):
        self._pending_delay = delay

    def startReading(self):
        deferToThread(self._thread)
        return self.poll(lambda: self._state != NOT_STARTED)

    def stopReading(self):
        # FIXME race conditions
        if self._state != STOPPED:
            self._state = STOPPING
        if self._pending_call:
            self._pending_call.cancel()
            self._pending_call = None

    def wait_until_stopped(self):
        try:
            fileutil.write(os.path.join(self._path.path, u".ignore-me"), "")
        except IOError:
            pass
        return self.poll(lambda: self._state == STOPPED)

    def watch(self, path, mask=IN_WATCH_MASK, autoAdd=False, callbacks=None, recursive=False):
        precondition(self._state == NOT_STARTED, "watch() can only be called before startReading()", state=self._state)
        precondition(self._filter is None, "only one watch is supported")
        precondition(isinstance(autoAdd, bool), autoAdd=autoAdd)
        precondition(isinstance(recursive, bool), recursive=recursive)
        #precondition(autoAdd == recursive, "need autoAdd and recursive to be the same", autoAdd=autoAdd, recursive=recursive)

        self._path = path
        path_u = path.path
        if not isinstance(path_u, unicode):
            path_u = path_u.decode(sys.getfilesystemencoding())
            _assert(isinstance(path_u, unicode), path_u=path_u)

        self._filter = FILE_NOTIFY_CHANGE_FILE_NAME | FILE_NOTIFY_CHANGE_DIR_NAME | FILE_NOTIFY_CHANGE_LAST_WRITE

        if mask & (IN_ACCESS | IN_CLOSE_NOWRITE | IN_OPEN):
            self._filter = self._filter | FILE_NOTIFY_CHANGE_LAST_ACCESS
        if mask & IN_ATTRIB:
            self._filter = self._filter | FILE_NOTIFY_CHANGE_ATTRIBUTES | FILE_NOTIFY_CHANGE_SECURITY

        self._recursive = TRUE if recursive else FALSE
        self._callbacks = callbacks or []
        self._hDirectory = _open_directory(path_u)

    def _thread(self):
        try:
            _assert(self._filter is not None, "no watch set")

            # To call Twisted or Tahoe APIs, use reactor.callFromThread as described in
            # <http://twistedmatrix.com/documents/current/core/howto/threading.html>.

            fni = FileNotifyInformation()

            while True:
                self._state = STARTED
                action = start_action(
                    action_type=u"read-changes",
                    directory=self._path.path,
                    recursive=self._recursive,
                    filter=self._filter,
                )
                try:
                    with action:
                        fni.read_changes(self._hDirectory, self._recursive, self._filter)
                except WindowsError as e:
                    self._state = STOPPING

                if self._check_stop():
                    return
                for info in fni:
                    path = self._path.preauthChild(info.filename)  # FilePath with Unicode path
                    if info.action == FILE_ACTION_MODIFIED and path.isdir():
                        Message.log(
                            message_type=u"filtering-out",
                            info=repr(info),
                        )
                        continue
                    else:
                        Message.log(
                            message_type=u"processing",
                            info=repr(info),
                        )
                        #mask = _action_to_inotify_mask.get(info.action, IN_CHANGED)

                        @log_call(
                            action_type=MAYBE_NOTIFY.action_type,
                            include_args=[],
                            include_result=False,
                        )
                        def _do_pending_calls():
                            event_mask = IN_CHANGED
                            self._pending_call = None
                            for path1 in self._pending:
                                if self._callbacks:
                                    for cb in self._callbacks:
                                        try:
                                            with CALLBACK(inotify_events=event_mask):
                                                cb(None, path1, event_mask)
                                        except Exception as e2:
                                            log.err(e2)
                            self._pending = set()

                        def _maybe_notify(path2):
                            if path2 not in self._pending:
                                self._pending.add(path2)
                            if self._state not in [STOPPING, STOPPED]:
                                _do_pending_calls()
                            # if self._pending_call is None and self._state not in [STOPPING, STOPPED]:
                            #     self._pending_call = reactor.callLater(self._pending_delay, _do_pending_calls)

                        reactor.callFromThread(_maybe_notify, path)
                        if self._check_stop():
                            return
        except Exception as e:
            log.err(e)
            self._state = STOPPED
            raise

    def _check_stop(self):
        if self._state == STOPPING:
            hDirectory = self._hDirectory
            self._callbacks = None
            self._hDirectory = None
            CloseHandle(hDirectory)
            self._state = STOPPED
            if self._pending_call:
                self._pending_call.cancel()
                self._pending_call = None

        return self._state == STOPPED