Merge 'pr295': add magic-folders to replace drop-upload

* Closes tahoe-lafs#295 (in rebased form)
* refs ticket:2438
commit d6d264f31c
Brian Warner, 2016-07-21 14:22:33 -07:00
36 changed files with 5389 additions and 629 deletions

.gitignore

@@ -27,7 +27,6 @@ zope.interface-*.egg
 /tmp*
 /*.patch
 /dist/
-/twisted/plugins/dropin.cache
 /tahoe-deps/
 /tahoe-deps.tar.gz
 /.coverage
@@ -38,3 +37,4 @@ zope.interface-*.egg
 /.tox/
 /docs/_build/
 /coverage.xml
+/smoke_magicfolder/

Makefile

@@ -40,6 +40,11 @@ upload-osx-pkg:
 	echo not uploading tahoe-lafs-osx-pkg because this is not trunk but is branch \"${BB_BRANCH}\" ; \
 	fi
+.PHONY: smoketest
+smoketest:
+	-python ./src/allmydata/test/check_magicfolder_smoke.py kill
+	-rm -rf smoke_magicfolder/
+	python ./src/allmydata/test/check_magicfolder_smoke.py
 # code coverage-based testing is disabled temporarily, as we switch to tox.
 # This will eventually be added to a tox environment. The following comments

docs/configuration.rst

@@ -76,7 +76,7 @@ Client/server nodes provide one or more of the following services:
 * web-API service
 * SFTP service
 * FTP service
-* drop-upload service
+* Magic Folder service
 * helper service
 * storage service.
@@ -459,11 +459,11 @@ SFTP, FTP
 for instructions on configuring these services, and the ``[sftpd]`` and
 ``[ftpd]`` sections of ``tahoe.cfg``.

-Drop-Upload
+Magic Folder

-As of Tahoe-LAFS v1.9.0, a node running on Linux can be configured to
-automatically upload files that are created or changed in a specified
-local directory. See :doc:`frontends/drop-upload` for details.
+A node running on Linux or Windows can be configured to automatically
+upload files that are created or changed in a specified local directory.
+See :doc:`frontends/magic-folder` for details.

 Storage Server Configuration

docs/frontends/drop-upload.rst

@@ -1,154 +0,0 @@
.. -*- coding: utf-8-with-signature -*-
===============================
Tahoe-LAFS Drop-Upload Frontend
===============================
1. `Introduction`_
2. `Configuration`_
3. `Known Issues and Limitations`_
Introduction
============
The drop-upload frontend allows an upload to a Tahoe-LAFS grid to be triggered
automatically whenever a file is created or changed in a specific local
directory. This is a preview of a feature that we expect to support across
several platforms, but it currently works only on Linux.
The implementation was written as a prototype at the First International
Tahoe-LAFS Summit in June 2011, and is not currently in as mature a state as
the other frontends (web, CLI, SFTP and FTP). This means that you probably
should not keep important data in the upload directory, and should not rely
on all changes to files in the local directory to result in successful uploads.
There might be (and have been) incompatible changes to how the feature is
configured. There is even the possibility that it may be abandoned, for
example if unsolvable reliability issues are found.
We are very interested in feedback on how well this feature works for you, and
suggestions to improve its usability, functionality, and reliability.
Configuration
=============
The drop-upload frontend runs as part of a gateway node. To set it up, you
need to choose the local directory to monitor for file changes, and a mutable
directory on the grid to which files will be uploaded.
These settings are configured in the ``[drop_upload]`` section of the
gateway's ``tahoe.cfg`` file.
``[drop_upload]``
``enabled = (boolean, optional)``
If this is ``True``, drop-upload will be enabled. The default value is
``False``.
``local.directory = (UTF-8 path)``
This specifies the local directory to be monitored for new or changed
files. If the path contains non-ASCII characters, it should be encoded
in UTF-8 regardless of the system's filesystem encoding. Relative paths
will be interpreted starting from the node's base directory.
In addition, the file ``private/drop_upload_dircap`` must contain a
writecap pointing to an existing mutable directory to be used as the target
of uploads. It will start with ``URI:DIR2:``, and cannot include an alias
or path.
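
For example, a gateway's ``tahoe.cfg`` might contain the following (the
values shown are illustrative only)::

    [drop_upload]
    enabled = True
    local.directory = /home/alice/drop-upload

with a writecap of the form ``URI:DIR2:...`` placed in
``private/drop_upload_dircap``.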
After setting the above fields and starting or restarting the gateway,
you can confirm that the feature is working by copying a file into the
local directory. Then, use the WUI or CLI to check that it has appeared
in the upload directory with the same filename. A large file may take some
time to appear, since it is only linked into the directory after the upload
has completed.
The 'Operational Statistics' page linked from the Welcome page shows counts
of the number of files uploaded, the number of change events currently
queued, and the number of failed uploads. The 'Recent Uploads and Downloads'
page and the node :doc:`log<../logging>` may be helpful to determine the
cause of any failures.
Known Issues and Limitations
============================
This frontend only works on Linux. There is an even-more-experimental
implementation for Windows (`#1431`_), and a ticket to add support for
Mac OS X and BSD-based systems (`#1432`_).
Subdirectories of the local directory are not monitored. If a subdirectory
is created, it will be ignored. (`#1433`_)
If files are created or changed in the local directory just after the gateway
has started, it might not have connected to a sufficient number of servers
when the upload is attempted, causing the upload to fail. (`#1449`_)
Files that were created or changed in the local directory while the gateway
was not running will not be uploaded. (`#1458`_)
The only way to determine whether uploads have failed is to look at the
'Operational Statistics' page linked from the Welcome page. This only shows
a count of failures, not the names of files. Uploads are never retried.
The drop-upload frontend performs its uploads sequentially (i.e. it waits
until each upload is finished before starting the next), even when there
would be enough memory and bandwidth to efficiently perform them in parallel.
A drop-upload can occur in parallel with an upload by a different frontend,
though. (`#1459`_)
If there are a large number of near-simultaneous file creation or
change events (greater than the number specified in the file
``/proc/sys/fs/inotify/max_queued_events``), it is possible that some events
could be missed. This is fairly unlikely under normal circumstances, because
the default value of ``max_queued_events`` in most Linux distributions is
16384, and events are removed from this queue immediately without waiting for
the corresponding upload to complete. (`#1430`_)
Some filesystems may not support the necessary change notifications.
So, it is recommended for the local directory to be on a directly attached
disk-based filesystem, not a network filesystem or one provided by a virtual
machine.
Attempts to read the mutable directory at about the same time as an uploaded
file is being linked into it might fail, even if they are done through the
same gateway. (`#1105`_)
When a local file is changed and closed several times in quick succession,
it may be uploaded more times than necessary to keep the remote copy
up-to-date. (`#1440`_)
Files deleted from the local directory will not be unlinked from the upload
directory. (`#1710`_)
The ``private/drop_upload_dircap`` file cannot use an alias or path to
specify the upload directory. (`#1711`_)
Files are always uploaded as immutable. If there is an existing mutable file
of the same name in the upload directory, it will be unlinked and replaced
with an immutable file. (`#1712`_)
If a file in the upload directory is changed (actually relinked to a new
file), then the old file is still present on the grid, and any other caps to
it will remain valid. See :doc:`../garbage-collection` for how to reclaim
the space used by files that are no longer needed.
Unicode names are supported, but the local name of a file must be encoded
correctly in order for it to be uploaded. The expected encoding is that
printed by ``python -c "import sys; print sys.getfilesystemencoding()"``.
.. _`#1105`: https://tahoe-lafs.org/trac/tahoe-lafs/ticket/1105
.. _`#1430`: https://tahoe-lafs.org/trac/tahoe-lafs/ticket/1430
.. _`#1431`: https://tahoe-lafs.org/trac/tahoe-lafs/ticket/1431
.. _`#1432`: https://tahoe-lafs.org/trac/tahoe-lafs/ticket/1432
.. _`#1433`: https://tahoe-lafs.org/trac/tahoe-lafs/ticket/1433
.. _`#1440`: https://tahoe-lafs.org/trac/tahoe-lafs/ticket/1440
.. _`#1449`: https://tahoe-lafs.org/trac/tahoe-lafs/ticket/1449
.. _`#1458`: https://tahoe-lafs.org/trac/tahoe-lafs/ticket/1458
.. _`#1459`: https://tahoe-lafs.org/trac/tahoe-lafs/ticket/1459
.. _`#1710`: https://tahoe-lafs.org/trac/tahoe-lafs/ticket/1710
.. _`#1711`: https://tahoe-lafs.org/trac/tahoe-lafs/ticket/1711
.. _`#1712`: https://tahoe-lafs.org/trac/tahoe-lafs/ticket/1712

docs/frontends/magic-folder.rst

@@ -0,0 +1,148 @@
.. -*- coding: utf-8-with-signature -*-
================================
Tahoe-LAFS Magic Folder Frontend
================================
1. `Introduction`_
2. `Configuration`_
3. `Known Issues and Limitations`_
Introduction
============
The Magic Folder frontend synchronizes local directories on two or more
clients, using a Tahoe-LAFS grid for storage. Whenever a file is created
or changed under the local directory of one of the clients, the change is
propagated to the grid and then to the other clients.
The implementation of the "drop-upload" frontend, on which Magic Folder is
based, was written as a prototype at the First International Tahoe-LAFS
Summit in June 2011. In 2015, with the support of a grant from the
`Open Technology Fund`_, it was redesigned and extended to support
synchronization between clients. It currently works on Linux and Windows.
Magic Folder is not currently in as mature a state as the other frontends
(web, CLI, SFTP and FTP). This means that you probably should not rely on
all changes to files in the local directory to result in successful uploads.
There might be (and have been) incompatible changes to how the feature is
configured.
We are very interested in feedback on how well this feature works for you, and
suggestions to improve its usability, functionality, and reliability.
.. _`Open Technology Fund`: https://www.opentech.fund/
Configuration
=============
The Magic Folder frontend runs as part of a gateway node. To set it up, you
must use the ``tahoe magic-folder`` CLI. For detailed information see our
`Magic-Folder CLI design documentation`_. For a given Magic-Folder collective
directory you first need to run the ``tahoe magic-folder create`` command.
After that, the ``tahoe magic-folder invite`` command must be used to generate
an *invite code* for each member of the magic-folder collective. A confidential,
authenticated communications channel should be used to transmit each invite
code to its member, who joins using the ``tahoe magic-folder join``
command.
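
For example, if Alice is creating a Magic Folder and inviting Bob (the node
directories, alias, nicknames, and local paths here are illustrative only)::

    tahoe -d alicenode magic-folder create magic: alice ~/magic-alice
    tahoe -d alicenode magic-folder invite magic: bob >invitecode
    # send invitecode to Bob over a confidential, authenticated channel
    tahoe -d bobnode magic-folder join "$(cat invitecode)" ~/magic-bob

Each gateway must then be restarted for the new configuration to take effect.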
These settings are persisted in the ``[magic_folder]`` section of the
gateway's ``tahoe.cfg`` file.
``[magic_folder]``
``enabled = (boolean, optional)``
If this is ``True``, Magic Folder will be enabled. The default value is
``False``.
``local.directory = (UTF-8 path)``
This specifies the local directory to be monitored for new or changed
files. If the path contains non-ASCII characters, it should be encoded
in UTF-8 regardless of the system's filesystem encoding. Relative paths
will be interpreted starting from the node's base directory.
You should not normally need to set these fields manually because they are
set by the ``tahoe magic-folder create`` and/or ``tahoe magic-folder join``
commands. Use the ``--help`` option to these commands for more information.
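
For reference, after running these commands the gateway's ``tahoe.cfg``
should contain a section along these lines (the local directory shown is
illustrative only)::

    [magic_folder]
    enabled = True
    local.directory = /home/alice/magic

and the ``private/magic_folder_dircap`` and ``private/collective_dircap``
files should contain the corresponding directory capabilities.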
After setting up a Magic Folder collective and starting or restarting each
gateway, you can confirm that the feature is working by copying a file into
any local directory, and checking that it appears on other clients.
Large files may take some time to appear.
The 'Operational Statistics' page linked from the Welcome page shows counts
of the number of files uploaded, the number of change events currently
queued, and the number of failed uploads. The 'Recent Uploads and Downloads'
page and the node :doc:`log<../logging>` may be helpful to determine the
cause of any failures.
Known Issues and Limitations
============================
This feature only works on Linux and Windows. There is a ticket to add
support for Mac OS X and BSD-based systems (`#1432`_).
The only way to determine whether uploads have failed is to look at the
'Operational Statistics' page linked from the Welcome page. This only shows
a count of failures, not the names of files. Uploads are never retried.
The Magic Folder frontend performs its uploads sequentially (i.e. it waits
until each upload is finished before starting the next), even when there
would be enough memory and bandwidth to efficiently perform them in parallel.
A Magic Folder upload can occur in parallel with an upload by a different
frontend, though. (`#1459`_)
On Linux, if there are a large number of near-simultaneous file creation or
change events (greater than the number specified in the file
``/proc/sys/fs/inotify/max_queued_events``), it is possible that some events
could be missed. This is fairly unlikely under normal circumstances, because
the default value of ``max_queued_events`` in most Linux distributions is
16384, and events are removed from this queue immediately without waiting for
the corresponding upload to complete. (`#1430`_)
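
You can check the current value of this limit with::

    cat /proc/sys/fs/inotify/max_queued_events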
The Windows implementation might also occasionally miss file creation or
change events, due to limitations of the underlying Windows API
(ReadDirectoryChangesW). We do not know how likely or unlikely this is.
(`#1431`_)
Some filesystems may not support the necessary change notifications.
So, it is recommended for the local directory to be on a directly attached
disk-based filesystem, not a network filesystem or one provided by a virtual
machine.
The ``private/magic_folder_dircap`` and ``private/collective_dircap`` files
cannot use an alias or path to specify the upload directory. (`#1711`_)
If a file in the upload directory is changed (actually relinked to a new
file), then the old file is still present on the grid, and any other caps
to it will remain valid. Eventually it will be possible to use
:doc:`../garbage-collection` to reclaim the space used by these files; however
currently they are retained indefinitely. (`#2440`_)
Unicode filenames are supported on both Linux and Windows, but on Linux, the
local name of a file must be encoded correctly in order for it to be uploaded.
The expected encoding is that printed by
``python -c "import sys; print sys.getfilesystemencoding()"``.
On Windows, local directories with non-ASCII names do not currently work.
(`#2219`_)
On Windows, when a node has Magic Folder enabled, it is unresponsive to Ctrl-C
(it can only be killed using Task Manager or similar). (`#2218`_)
.. _`#1430`: https://tahoe-lafs.org/trac/tahoe-lafs/ticket/1430
.. _`#1431`: https://tahoe-lafs.org/trac/tahoe-lafs/ticket/1431
.. _`#1432`: https://tahoe-lafs.org/trac/tahoe-lafs/ticket/1432
.. _`#1459`: https://tahoe-lafs.org/trac/tahoe-lafs/ticket/1459
.. _`#1711`: https://tahoe-lafs.org/trac/tahoe-lafs/ticket/1711
.. _`#2218`: https://tahoe-lafs.org/trac/tahoe-lafs/ticket/2218
.. _`#2219`: https://tahoe-lafs.org/trac/tahoe-lafs/ticket/2219
.. _`#2440`: https://tahoe-lafs.org/trac/tahoe-lafs/ticket/2440
.. _`garbage collection`: ../garbage-collection.rst
.. _`Magic-Folder CLI design documentation`: ../proposed/magic-folder/user-interface-design.rst

docs/magic-folder-howto.rst

@@ -0,0 +1,231 @@
=========================
Magic Folder Set-up Howto
=========================
1. `This document`_
2. `Preparation`_
3. `Setting up a local test grid`_
4. `Setting up Magic Folder`_
5. `Testing`_
This document
=============
This is preliminary documentation of how to set up the
Magic Folder pre-release using a test grid on a single Linux
or Windows machine, with two clients and one server. It is
aimed at a fairly technical audience.
For an introduction to Magic Folder and how to configure it
more generally, see `docs/frontends/magic-folder.rst`_.
It is possible to adapt these instructions to run the nodes on
different machines, to synchronize between three or more clients,
to mix Windows and Linux clients, and to use multiple servers
(if the Tahoe-LAFS encoding parameters are changed).
.. _`docs/frontends/magic-folder.rst`: ../docs/frontends/magic-folder.rst
Preparation
===========
Linux
-----
Install ``git`` from your distribution's package manager.
Then run these commands::
git clone -b 2438.magic-folder-stable.8 https://github.com/tahoe-lafs/tahoe-lafs.git
cd tahoe-lafs
python setup.py test
The test suite usually takes about 15 minutes to run.
Note that it is normal for some tests to be skipped.
In the current branch, the Magic Folder tests produce
considerable debugging output.
If you see an error like ``fatal error: Python.h: No such file or directory``
while compiling the dependencies, you need the Python development headers. If
you are on a Debian or Ubuntu system, you can install them with ``sudo
apt-get install python-dev``. On RedHat/Fedora, install ``python-devel``.
Windows
-------
Windows 7 or above is required.
For 64-bit Windows:
* Install Python 2.7 from
https://www.python.org/ftp/python/2.7/python-2.7.amd64.msi
* Install pywin32 from
https://tahoe-lafs.org/source/tahoe-lafs/deps/windows/pywin32-219.win-amd64-py2.7.exe
* Install git from
https://github.com/git-for-windows/git/releases/download/v2.6.2.windows.1/Git-2.6.2-64-bit.exe
For 32-bit Windows:
* Install Python 2.7 from
https://www.python.org/ftp/python/2.7/python-2.7.msi
* Install pywin32 from
https://tahoe-lafs.org/source/tahoe-lafs/deps/windows/pywin32-219.win32-py2.7.exe
* Install git from
https://github.com/git-for-windows/git/releases/download/v2.6.2.windows.1/Git-2.6.2-32-bit.exe
Then (for any version) run these commands in a Command Prompt::
git clone -b 2438.magic-folder-stable.5 https://github.com/tahoe-lafs/tahoe-lafs.git
cd tahoe-lafs
python setup.py build
Open a new Command Prompt with the same current directory,
then run::
bin\tahoe --version-and-path
It is normal for this command to print warnings and debugging output
on some systems. ``python setup.py test`` can also be run, but there
are some known sources of nondeterministic errors in tests on Windows
that are unrelated to Magic Folder.
Setting up a local test grid
============================
Linux
-----
Run these commands::
mkdir ../grid
bin/tahoe create-introducer ../grid/introducer
bin/tahoe start ../grid/introducer
export FURL=`cat ../grid/introducer/private/introducer.furl`
bin/tahoe create-node --introducer="$FURL" ../grid/server
bin/tahoe create-client --introducer="$FURL" ../grid/alice
bin/tahoe create-client --introducer="$FURL" ../grid/bob
Windows
-------
Run::
mkdir ..\grid
bin\tahoe create-introducer ..\grid\introducer
bin\tahoe start ..\grid\introducer
Leave the introducer running in that Command Prompt,
and in a separate Command Prompt (with the same current
directory), run::
set /p FURL=<..\grid\introducer\private\introducer.furl
bin\tahoe create-node --introducer=%FURL% ..\grid\server
bin\tahoe create-client --introducer=%FURL% ..\grid\alice
bin\tahoe create-client --introducer=%FURL% ..\grid\bob
Both Linux and Windows
----------------------
(Replace ``/`` with ``\`` for Windows paths.)
Edit ``../grid/alice/tahoe.cfg``, and make the following
changes to the ``[node]`` and ``[client]`` sections::
[node]
nickname = alice
web.port = tcp:3457:interface=127.0.0.1
[client]
shares.needed = 1
shares.happy = 1
shares.total = 1
Edit ``../grid/bob/tahoe.cfg``, and make the following
change to the ``[node]`` section, and the same change as
above to the ``[client]`` section::
[node]
nickname = bob
web.port = tcp:3458:interface=127.0.0.1
Note that when running nodes on a single machine,
unique port numbers must be used for each node (and they
must not clash with ports used by other server software).
Here we have used the default of 3456 for the server,
3457 for alice, and 3458 for bob.
Now start all of the nodes (the introducer should still be
running from above)::
bin/tahoe start ../grid/server
bin/tahoe start ../grid/alice
bin/tahoe start ../grid/bob
On Windows, a separate Command Prompt is needed to run each
node.
Open a web browser on http://127.0.0.1:3457/ and verify that
alice is connected to the introducer and one storage server.
Then do the same for http://127.0.0.1:3458/ to verify that
bob is connected. Leave all of the nodes running for the
next stage.
Setting up Magic Folder
=======================
Linux
-----
Run::
mkdir -p ../local/alice ../local/bob
bin/tahoe -d ../grid/alice magic-folder create magic: alice ../local/alice
bin/tahoe -d ../grid/alice magic-folder invite magic: bob >invitecode
export INVITECODE=`cat invitecode`
bin/tahoe -d ../grid/bob magic-folder join "$INVITECODE" ../local/bob
bin/tahoe restart ../grid/alice
bin/tahoe restart ../grid/bob
Windows
-------
Run::
mkdir ..\local\alice ..\local\bob
bin\tahoe -d ..\grid\alice magic-folder create magic: alice ..\local\alice
bin\tahoe -d ..\grid\alice magic-folder invite magic: bob >invitecode
set /p INVITECODE=<invitecode
bin\tahoe -d ..\grid\bob magic-folder join %INVITECODE% ..\local\bob
Then close the Command Prompt windows that are running the alice and bob
nodes, and open two new ones in which to run::
bin\tahoe start ..\grid\alice
bin\tahoe start ..\grid\bob
Testing
=======
You can now experiment with creating files and directories in
``../local/alice`` and ``../local/bob``; any changes should be
propagated to the other directory.
Note that when a file is deleted, the corresponding file in the
other directory will be renamed to a filename ending in ``.backup``.
Deleting a directory will have no effect.
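
For example, on Linux you can check that synchronization is working like
this (allow a few seconds for propagation)::

    echo "hello, magic folder" > ../local/alice/hello.txt
    # wait a few seconds, then:
    cat ../local/bob/hello.txt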
For other known issues and limitations, see
https://github.com/tahoe-lafs/tahoe-lafs/blob/2438.magic-folder-stable.8/docs/frontends/magic-folder.rst#known-issues-and-limitations
As mentioned earlier, it is also possible to run the nodes on
different machines, to synchronize between three or more clients,
to mix Windows and Linux clients, and to use multiple servers
(if the Tahoe-LAFS encoding parameters are changed).

docs/proposed/magic-folder/multi-party-conflict-detection.rst

@@ -0,0 +1,375 @@
Multi-party Conflict Detection
==============================
The current Magic-Folder remote conflict detection design does not properly detect remote conflicts
for groups of three or more parties. This design is specified in the "Fire Dragon" section of this document:
https://github.com/tahoe-lafs/tahoe-lafs/blob/2551.wip.2/docs/proposed/magic-folder/remote-to-local-sync.rst#fire-dragons-distinguishing-conflicts-from-overwrites
This Tahoe-LAFS trac `ticket comment`_ outlines a scenario with
three parties in which a remote conflict is falsely detected.
.. _`ticket comment`: https://tahoe-lafs.org/trac/tahoe-lafs/ticket/2551#comment:22
Summary and definitions
=======================
Abstract file: a file being shared by a Magic Folder.
Local file: a file in a client's local filesystem corresponding to an abstract file.
Relative path: the path of an abstract or local file relative to the Magic Folder root.
Version: a snapshot of an abstract file, with associated metadata, that is uploaded by a Magic Folder client.
A version is associated with the file's relative path, its contents, and
mtime and ctime timestamps. Versions also have a unique identity.
Follows relation:
* If and only if a change to a client's local file at relative path F, made
  when the client already had version V of that file, results in an upload of
  version V', then we say that V' directly follows V.
* The follows relation is the irreflexive transitive closure of the "directly follows" relation.
The follows relation is transitive and acyclic, and therefore defines a DAG called the
Version DAG. Different abstract files correspond to disconnected sets of nodes in the Version DAG
(in other words there are no "follows" relations between different files).
The DAG is only ever extended, not mutated.
The desired behaviour for initially classifying overwrites and conflicts is as follows:
* if a client Bob currently has version V of a file at relative path F, and it sees a new version V'
of that file in another client Alice's DMD, such that V' follows V, then the write of the new version
is initially an overwrite and should be to the same filename.
* if, in the same situation, V' does not follow V, then the write of the new version should be
classified as a conflict.
The existing `Magic Folder design for remote-to-local sync`_ defines when an initial overwrite
should be reclassified as a conflict.
The above definitions completely specify the desired solution of the false
conflict behaviour described in the `ticket comment`_. However, they do not give
a concrete algorithm to compute the follows relation, or a representation in the
Tahoe-LAFS file store of the metadata needed to compute it.
We will consider two alternative designs, proposed by Leif Ryge and
Zooko Wilcox-O'Hearn, that aim to fill this gap.
.. _`Magic Folder design for remote-to-local sync`: remote-to-local-sync.rst
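
As a point of reference, the classification above can be expressed as a walk
backwards through the Version DAG. The following Python sketch is illustrative
only and is not part of either proposal; ``parents(v)`` is a hypothetical
function returning the direct predecessors of a version (the versions that
``v`` directly follows)::

    def follows(v_new, v_old, parents):
        # True iff v_new follows v_old: walk backwards from v_new through
        # the DAG looking for v_old. The relation is irreflexive, so we
        # start from v_new's parents rather than from v_new itself.
        seen = set()
        stack = list(parents(v_new))
        while stack:
            v = stack.pop()
            if v in seen:
                continue
            seen.add(v)
            if v == v_old:
                return True
            stack.extend(parents(v))
        return False

    def classify(v_current, v_remote, parents):
        # Initial classification when we hold v_current and see v_remote
        # in another client's DMD.
        return "overwrite" if follows(v_remote, v_current, parents) else "conflict"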
Leif's Proposal: Magic-Folder "single-file" snapshot design
===========================================================
Abstract
--------
We propose a relatively simple modification to the initial Magic Folder design which
adds merkle DAGs of immutable historical snapshots for each file. The full history
does not necessarily need to be retained, and the choice of how much history to retain
can potentially be made on a per-file basis.
Motivation:
-----------
no SPOFs, no admins
```````````````````
Additionally, the initial design had two cases of excess authority:
1. The magic folder administrator (inviter) has everyone's write-caps and is thus essentially "root"
2. Each client shares ambient authority and can delete anything or everything and
(assuming there is not a conflict) the data will be deleted from all clients. So, each client
is effectively "root" too.
Thus, while it is useful for file synchronization, the initial design is a much less safe place
to store data than a single mutable tahoe directory (because more client computers have the
ability to delete it).
Glossary
--------
- merkle DAG: like a merkle tree but with multiple roots, and with each node potentially having multiple parents
- magic folder: a logical directory that can be synchronized between many clients
(devices, users, ...) using a Tahoe-LAFS storage grid
- client: a Magic-Folder-enabled Tahoe-LAFS client instance that has access to a magic folder
- DMD: "distributed mutable directory", a physical Tahoe-LAFS mutable directory.
  Each client has the write cap to their own DMD, and read caps to all other clients' DMDs
  (as in the original Magic Folder design).
- snapshot: a reference to a version of a file; represented as an immutable directory containing
an entry called "content" (pointing to the immutable file containing the file's contents),
and an entry called "parent0" (pointing to a parent snapshot), and optionally parent1 through
parentN pointing at other parents. The Magic Folder snapshot object is conceptually very similar
to a git commit object, except that it is created automatically and records the history of an
individual file rather than an entire repository. Also, commits do not need to have authors
(although an author field could be easily added later).
- deletion snapshot: immutable directory containing no content entry (only one or more parents)
- capability: a Tahoe-LAFS diminishable cryptographic capability
- cap: short for capability
- conflict: the situation when another client's current snapshot for a file is different than our current snapshot, and is not a descendant of ours.
- overwrite: the situation when another client's current snapshot for a file is a (not necessarily direct) descendant of our current snapshot.
Overview
--------
This new design will track the history of each file using "snapshots" which are
created at each upload. Each snapshot will specify one or more parent snapshots,
forming a directed acyclic graph. A Magic-Folder user's DMD uses a flattened directory
hierarchy naming scheme, as in the original design. But, instead of pointing directly
at file contents, each file name will link to that user's latest snapshot for that file.
Inside the DMD there will also be an immutable directory containing the client's subscriptions
(read-caps to other clients' DMDs).
Clients periodically poll each other's DMDs. When they see that the current snapshot for a file is
different from their own current snapshot for that file, they immediately begin downloading its
contents and then walk backwards through the DAG from the new snapshot until they find their own
snapshot or a common ancestor.
For the common ancestor search to be efficient, the client will need to keep a local store (in the magic folder db) of all of the snapshots
(but not their contents) between the oldest current snapshot of any of their subscriptions and their own current snapshot.
See "local cache purging policy" below for more details.
If the new snapshot is a descendant of the client's existing snapshot, then this update
is an "overwrite" - like a git fast-forward. So, when the download of the new file completes it can overwrite
the existing local file with the new contents and update its DMD to point at the new snapshot.
If the new snapshot is not a descendant of the client's current snapshot, then the update is a
conflict. The new file is downloaded and named $filename.conflict-$user1,$user2 (including a list
of other subscriptions who have that version as their current version).
Changes to the local .conflict- file are not tracked. When that file disappears
(either by deletion, or being renamed) a new snapshot for the conflicting file is
created which has two parents - the client's snapshot prior to the conflict, and the
new conflicting snapshot. If multiple .conflict files are deleted or renamed in a short
period of time, a single conflict-resolving snapshot with more than two parents can be created.
! I think this behavior will confuse users.
Tahoe-LAFS snapshot objects
---------------------------
These Tahoe-LAFS snapshot objects only track the history of a single file, not a directory hierarchy.
Snapshot objects contain only two field types:
- ``Content``: an immutable capability of the file contents (omitted if deletion snapshot)
- ``Parent0..N``: immutable capabilities representing parent snapshots
An interesting side effect of this Tahoe snapshot design is that there is no
snapshot author. The only notion of identity in the Magic-Folder system is the write capability of the user's DMD.
The snapshot object is an immutable directory which looks like this::

    content    -> immutable cap to file content
    parent0    -> immutable cap to a parent snapshot object
    parent1..N -> more parent snapshots
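
As a minimal sketch, the child entries of such a snapshot directory could be
assembled like this (the helper name is hypothetical; actually creating the
immutable directory is done through the usual Tahoe-LAFS directory APIs)::

    def snapshot_children(content_cap, parent_caps):
        # Build the name -> cap mapping for a snapshot's immutable directory.
        # content_cap is None for a deletion snapshot (parents only).
        children = {}
        if content_cap is not None:
            children["content"] = content_cap
        for i, cap in enumerate(parent_caps):
            children["parent%d" % i] = cap
        return children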
Snapshot Author Identity
------------------------
Snapshot identity might become an important feature so that bad actors
can be recognized and other clients can stop "subscribing" to (polling for) updates from them.
Perhaps snapshots could be signed by the user's Magic-Folder write key for this purpose? Probably a bad idea to reuse the write-cap key for this. Better to introduce ed25519 identity keys which can (optionally) sign snapshot contents and store the signature as another member of the immutable directory.
Conflict Resolution
-------------------
detection of conflicts
``````````````````````
A Magic-Folder client updates a given file's current snapshot link to a snapshot which is a descendant
of the previous snapshot. For a given file, let's say "file1", Alice can detect that Bob's DMD has a "file1"
that links to a snapshot which conflicts. Two snapshots conflict if neither is an ancestor of the other.
a possible UI for resolving conflicts
`````````````````````````````````````
If Alice links a conflicting snapshot object for a file named "file1",
Bob and Carole will see a file in their Magic-Folder called "file1.conflicted.Alice".
Alice conversely will see an additional file called "file1.conflicted.previous".
If Alice wishes to resolve the conflict with her new version of the file then
she simply deletes the file called "file1.conflicted.previous". If she wants to
choose the other version then she moves it into place::

    mv file1.conflicted.previous file1

This scheme works for any number of conflicts. Bob, for instance, could choose
the same resolution for the conflict, like this::

    mv file1.conflicted.Alice file1
Deletion propagation and eventual Garbage Collection
----------------------------------------------------
When a user deletes a file, this is represented by a link from their DMD file
object to a deletion snapshot. Eventually all users will link this deletion
snapshot into their DMD. When all users have the link then they locally cache
the deletion snapshot and remove the link to that file in their DMD.
Deletions can of course be undone; this means creating a new snapshot
object that declares itself a descendant of the deletion snapshot.
Clients periodically renew leases to all capabilities recursively linked
to in their DMD. Files which are unlinked by ALL the users of a
given Magic-Folder will eventually be garbage collected.
Lease expiry duration must be tuned properly on storage servers so that
garbage collection does not occur too frequently.
Performance Considerations
--------------------------
local changes
`````````````
Our old scheme requires two remote Tahoe-LAFS operations per local file modification:
1. upload new file contents (as an immutable file)
2. modify mutable directory (DMD) to link to the immutable file cap
Our new scheme requires three remote operations:
1. upload new file contents (as an immutable file)
2. upload immutable directory representing Tahoe-LAFS snapshot object
3. modify mutable directory (DMD) to link to the immutable snapshot object
remote changes
``````````````
Our old scheme requires one remote Tahoe-LAFS operation per remote file modification (not counting the polling of the dmd):
1. Download new file content
Our new scheme requires a minimum of two remote operations (not counting the polling of the dmd) for conflicting downloads, or three remote operations for overwrite downloads:
1. Download new snapshot object
2. Download the content it points to
3. If the download is an overwrite, modify the DMD to indicate that the downloaded version is their current version.
If the new snapshot is not a direct descendant of our current snapshot, or of the previous snapshot
we saw from the other party, we will also need to download more snapshots to determine whether it is
a conflict or an overwrite. However, those downloads can be done in
parallel with the content download since we will need to download the content in either case.
While the old scheme is obviously more efficient, we think that the properties provided by the new scheme make it worth the additional cost.
Physical updates to the DMD obviously need to be serialized, so multiple logical updates should be combined when an update is already in progress.
conflict detection and local caching
````````````````````````````````````
Local caching of snapshots is important for performance.
We refer to the client's local snapshot cache as the ``magic-folder db``.
Conflict detection can be expensive because it may require the client
to download many snapshots from the other user's DMD in order to try
to find its own current snapshot or a descendant. The cost of scanning
the remote DMDs should not be very high unless the client conducting the
scan has lots of history to download because of being offline for a long
time while many new snapshots were distributed.
local cache purging policy
``````````````````````````
The client's current snapshot for each file should be cached at all times.
When all clients' views of a file are synchronized (they all have the same
snapshot for that file), no ancestry for that file needs to be cached.
When clients' views of a file are *not* synchronized, the most recent
common ancestor of all clients' snapshots must be kept cached, as must
all intermediate snapshots.
Local Merge Property
--------------------
Bob can, in fact, set a pre-existing directory (with files) as his new Magic-Folder directory, resulting
in a merge of the Magic-Folder with Bob's local directory. Filename collisions will result in conflicts
because Bob's new snapshots are not descendants of the existing Magic-Folder file snapshots.
Example: simultaneous update with four parties:
1. A, B, C, D are in sync for file "foo" at snapshot X
2. A and B simultaneously change the file, creating snapshots XA and XB (both descendants of X).
3. C hears about XA first, and D hears about XB first. Both accept an overwrite.
4. All four parties hear about the other update they hadn't heard about yet.
5. Result:
- everyone's local file "foo" has the content pointed to by the snapshot in their DMD's "foo" entry
- A and C's DMDs each have the "foo" entry pointing at snapshot XA
- B and D's DMDs each have the "foo" entry pointing at snapshot XB
- A and C have a local file called foo.conflict-B,D with XB's content
- B and D have a local file called foo.conflict-A,C with XA's content
Later:
- Everyone ignores the conflict, and continues updating their local "foo", but slowly enough that there are no further conflicts, so that A and C remain in sync with each other, and B and D remain in sync with each other.
- A and C's foo.conflict-B,D file continues to be updated with the latest version of the file B and D are working on, and vice-versa.
- A and C edit the file at the same time again, causing a new conflict.
- Local files are now:
A: "foo", "foo.conflict-B,D", "foo.conflict-C"
C: "foo", "foo.conflict-B,D", "foo.conflict-A"
B and D: "foo", "foo.conflict-A", "foo.conflict-C"
- Finally, D decides to look at "foo.conflict-A" and "foo.conflict-C", and they manually integrate (or decide to ignore) the differences into their own local file "foo".
- D deletes their conflict files.
- D's DMD now points to a snapshot that is a descendant of everyone else's current snapshot, resolving all conflicts.
- The conflict files on A, B, and C disappear, and everyone's local file "foo" contains D's manually-merged content.
Daira: I think it is too complicated to include multiple nicknames in the .conflict files
(e.g. "foo.conflict-B,D"). It should be sufficient to have one file for each other client,
reflecting that client's latest version, regardless of who else it conflicts with.
Zooko's Design (as interpreted by Daira)
========================================
A version map is a mapping from client nickname to version number.
Definition: a version map M' strictly-follows a mapping M iff for every entry c->v
in M, there is an entry c->v' in M' such that v' > v.
Each client maintains a 'local version map' and a 'conflict version map' for each file
in its magic folder db.
If it has never written the file, then the entry for its own nickname in the local version
map is zero. The conflict version map only contains entries for nicknames B where
"$FILENAME.conflict-$B" exists.
When a client A uploads a file, it increments the version for its own nickname in its
local version map for the file, and includes that map as metadata with its upload.
A download by client A from client B is an overwrite iff the downloaded version map
strictly-follows A's local version map for that file; in this case A replaces its local
version map with the downloaded version map. Otherwise it is a conflict, and the
download is put into "$FILENAME.conflict-$B"; in this case A's
local version map remains unchanged, and the entry B->v taken from the downloaded
version map is added to its conflict version map.
If client A deletes or renames a conflict file "$FILENAME.conflict-$B", then A copies
the entry for B from its conflict version map to its local version map, deletes
the entry for B in its conflict version map, and performs another upload (with
incremented version number) of $FILENAME.
Example::

    A, B, C = (10, 20, 30)   everyone agrees.
    A updates: (11, 20, 30)
    B updates: (10, 21, 30)

C will see either A or B first. Both would be an overwrite, if considered alone.
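
As the example shows, entries for clients that did not update stay equal, so
the comparison in "strictly-follows" must allow equality for those entries
(with the uploader's own entry strictly increasing). A Python sketch under
that reading (function and variable names are illustrative only)::

    def follows(m_new, m_old):
        # m_new follows m_old iff no client's version number goes backwards.
        return all(c in m_new and m_new[c] >= v for c, v in m_old.items())

    base    = {"A": 10, "B": 20, "C": 30}   # everyone agrees
    after_a = {"A": 11, "B": 20, "C": 30}   # A updates
    after_b = {"A": 10, "B": 21, "C": 30}   # B updates

    assert follows(after_a, base) and follows(after_b, base)  # each alone: overwrite
    assert not follows(after_a, after_b)  # conflict, whichever C saw first
    assert not follows(after_b, after_a)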

src/allmydata/client.py

@@ -144,7 +144,7 @@ class Client(node.Node, pollmixin.PollMixin):
         self.init_helper()
         self.init_ftp_server()
         self.init_sftp_server()
-        self.init_drop_uploader()
+        self.init_magic_folder()

     # If the node sees an exit_trigger file, it will poll every second to see
     # whether the file still exists, and what its mtime is. If the file does not
@@ -373,11 +373,6 @@ class Client(node.Node, pollmixin.PollMixin):
         self.storage_broker = sb
         sb.setServiceParent(self)
-        connection_threshold = min(self.encoding_params["k"],
-                                   self.encoding_params["happy"] + 1)
-        helper = storage_client.ConnectedEnough(sb, connection_threshold)
-        self.upload_ready_d = helper.when_connected_enough()
         # utilize the loaded static server specifications
         for key, server in self.connections_config['servers'].items():
             eventually(self.storage_broker.got_static_announcement,
@@ -472,25 +467,38 @@ class Client(node.Node, pollmixin.PollMixin):
                                sftp_portstr, pubkey_file, privkey_file)
             s.setServiceParent(self)

-    def init_drop_uploader(self):
+    def init_magic_folder(self):
+        #print "init_magic_folder"
         if self.get_config("drop_upload", "enabled", False, boolean=True):
-            if self.get_config("drop_upload", "upload.dircap", None):
-                raise OldConfigOptionError("The [drop_upload]upload.dircap option is no longer supported; please "
-                                           "put the cap in a 'private/drop_upload_dircap' file, and delete this option.")
+            raise OldConfigOptionError("The [drop_upload] section must be renamed to [magic_folder].\n"
+                                       "See docs/frontends/magic-folder.rst for more information.")

-            upload_dircap = self.get_or_create_private_config("drop_upload_dircap")
-            local_dir_utf8 = self.get_config("drop_upload", "local.directory")
+        if self.get_config("magic_folder", "enabled", False, boolean=True):
+            #print "magic folder enabled"
+            upload_dircap = self.get_private_config("magic_folder_dircap")
+            collective_dircap = self.get_private_config("collective_dircap")

-            try:
-                from allmydata.frontends import drop_upload
-                s = drop_upload.DropUploader(self, upload_dircap, local_dir_utf8)
-                s.setServiceParent(self)
-                s.startService()
+            local_dir_config = self.get_config("magic_folder", "local.directory").decode("utf-8")
+            local_dir = abspath_expanduser_unicode(local_dir_config, base=self.basedir)

-                # start processing the upload queue when we've connected to enough servers
-                self.upload_ready_d.addCallback(s.upload_ready)
-            except Exception, e:
-                self.log("couldn't start drop-uploader: %r", args=(e,))
+            dbfile = os.path.join(self.basedir, "private", "magicfolderdb.sqlite")
+            dbfile = abspath_expanduser_unicode(dbfile)
+
+            from allmydata.frontends import magic_folder
+            umask = self.get_config("magic_folder", "download.umask", 0077)
+
+            s = magic_folder.MagicFolder(self, upload_dircap, collective_dircap, local_dir, dbfile, umask)
+            self._magic_folder = s
+            s.setServiceParent(self)
+            s.startService()
+
+            # start processing the upload queue when we've connected to enough servers
+            connection_threshold = min(self.encoding_params["k"],
+                                       self.encoding_params["happy"] + 1)
+            connected = storage_client.ConnectedEnough(
+                self.storage_broker,
+                connection_threshold,
+            )
+            connected.when_connected_enough().addCallback(lambda ign: s.ready())
def _check_exit_trigger(self, exit_trigger_file):
if os.path.exists(exit_trigger_file):

src/allmydata/frontends/drop_upload.py

@@ -1,132 +0,0 @@
import sys
from twisted.internet import defer
from twisted.python.filepath import FilePath
from twisted.application import service
from foolscap.api import eventually
from allmydata.interfaces import IDirectoryNode
from allmydata.util.encodingutil import quote_output, get_filesystem_encoding
from allmydata.util.fileutil import abspath_expanduser_unicode
from allmydata.immutable.upload import FileName
class DropUploader(service.MultiService):
name = 'drop-upload'
def __init__(self, client, upload_dircap, local_dir_utf8, inotify=None):
service.MultiService.__init__(self)
try:
local_dir_u = abspath_expanduser_unicode(local_dir_utf8.decode('utf-8'))
if sys.platform == "win32":
local_dir = local_dir_u
else:
local_dir = local_dir_u.encode(get_filesystem_encoding())
except (UnicodeEncodeError, UnicodeDecodeError):
raise AssertionError("The '[drop_upload] local.directory' parameter %s was not valid UTF-8 or "
"could not be represented in the filesystem encoding."
% quote_output(local_dir_utf8))
self._client = client
self._stats_provider = client.stats_provider
self._convergence = client.convergence
self._local_path = FilePath(local_dir)
self.is_upload_ready = False
if inotify is None:
from twisted.internet import inotify
self._inotify = inotify
if not self._local_path.exists():
raise AssertionError("The '[drop_upload] local.directory' parameter was %s but there is no directory at that location." % quote_output(local_dir_u))
if not self._local_path.isdir():
raise AssertionError("The '[drop_upload] local.directory' parameter was %s but the thing at that location is not a directory." % quote_output(local_dir_u))
# TODO: allow a path rather than a cap URI.
self._parent = self._client.create_node_from_uri(upload_dircap)
if not IDirectoryNode.providedBy(self._parent):
raise AssertionError("The URI in 'private/drop_upload_dircap' does not refer to a directory.")
if self._parent.is_unknown() or self._parent.is_readonly():
raise AssertionError("The URI in 'private/drop_upload_dircap' is not a writecap to a directory.")
self._uploaded_callback = lambda ign: None
self._notifier = inotify.INotify()
# We don't watch for IN_CREATE, because that would cause us to read and upload a
# possibly-incomplete file before the application has closed it. There should always
# be an IN_CLOSE_WRITE after an IN_CREATE (I think).
# TODO: what about IN_MOVE_SELF or IN_UNMOUNT?
mask = inotify.IN_CLOSE_WRITE | inotify.IN_MOVED_TO | inotify.IN_ONLYDIR
self._notifier.watch(self._local_path, mask=mask, callbacks=[self._notify])
def startService(self):
service.MultiService.startService(self)
d = self._notifier.startReading()
self._stats_provider.count('drop_upload.dirs_monitored', 1)
return d
def upload_ready(self):
"""upload_ready is used to signal us to start
processing the upload items...
"""
self.is_upload_ready = True
def _notify(self, opaque, path, events_mask):
self._log("inotify event %r, %r, %r\n" % (opaque, path, ', '.join(self._inotify.humanReadableMask(events_mask))))
self._stats_provider.count('drop_upload.files_queued', 1)
eventually(self._process, opaque, path, events_mask)
def _process(self, opaque, path, events_mask):
d = defer.succeed(None)
# FIXME: if this already exists as a mutable file, we replace the directory entry,
# but we should probably modify the file (as the SFTP frontend does).
def _add_file(ign):
name = path.basename()
# on Windows the name is already Unicode
if not isinstance(name, unicode):
name = name.decode(get_filesystem_encoding())
u = FileName(path.path, self._convergence)
return self._parent.add_file(name, u)
d.addCallback(_add_file)
def _succeeded(ign):
self._stats_provider.count('drop_upload.files_queued', -1)
self._stats_provider.count('drop_upload.files_uploaded', 1)
def _failed(f):
self._stats_provider.count('drop_upload.files_queued', -1)
if path.exists():
self._log("drop-upload: %r failed to upload due to %r" % (path.path, f))
self._stats_provider.count('drop_upload.files_failed', 1)
return f
else:
self._log("drop-upload: notified file %r disappeared "
"(this is normal for temporary files): %r" % (path.path, f))
self._stats_provider.count('drop_upload.files_disappeared', 1)
return None
d.addCallbacks(_succeeded, _failed)
d.addBoth(self._uploaded_callback)
return d
def set_uploaded_callback(self, callback):
"""This sets a function that will be called after a file has been uploaded."""
self._uploaded_callback = callback
def finish(self, for_tests=False):
self._notifier.stopReading()
self._stats_provider.count('drop_upload.dirs_monitored', -1)
if for_tests and hasattr(self._notifier, 'wait_until_stopped'):
return self._notifier.wait_until_stopped()
else:
return defer.succeed(None)
def _log(self, msg):
self._client.log(msg)
#open("events", "ab+").write(msg)

src/allmydata/frontends/magic_folder.py

@@ -0,0 +1,975 @@
import sys, os
import os.path
from collections import deque
import time
from twisted.internet import defer, reactor, task
from twisted.python.failure import Failure
from twisted.python import runtime
from twisted.application import service
from zope.interface import Interface, Attribute, implementer
from allmydata.util import fileutil
from allmydata.interfaces import IDirectoryNode
from allmydata.util import log
from allmydata.util.fileutil import precondition_abspath, get_pathinfo, ConflictError
from allmydata.util.assertutil import precondition, _assert
from allmydata.util.deferredutil import HookMixin
from allmydata.util.progress import PercentProgress
from allmydata.util.encodingutil import listdir_filepath, to_filepath, \
extend_filepath, unicode_from_filepath, unicode_segments_from, \
quote_filepath, quote_local_unicode_path, quote_output, FilenameEncodingError
from allmydata.immutable.upload import FileName, Data
from allmydata import magicfolderdb, magicpath
defer.setDebugging(True)
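# inotify flag from the Linux inotify API, defined by hand here because
# Twisted's inotify module does not expose it: don't generate events for
# children of a watched directory after they have been unlinked.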
IN_EXCL_UNLINK = 0x04000000L
def get_inotify_module():
try:
if sys.platform == "win32":
from allmydata.windows import inotify
elif runtime.platform.supportsINotify():
from twisted.internet import inotify
else:
raise NotImplementedError("filesystem notification needed for Magic Folder is not supported.\n"
"This currently requires Linux or Windows.")
return inotify
except (ImportError, AttributeError) as e:
log.msg(e)
if sys.platform == "win32":
raise NotImplementedError("filesystem notification needed for Magic Folder is not supported.\n"
"Windows support requires at least Vista, and has only been tested on Windows 7.")
raise
def is_new_file(pathinfo, db_entry):
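    # No db entry: we have never seen this path before, so it is new.
    # If the path no longer exists and the db recorded a deletion (size is
    # None), nothing has changed; otherwise compare the current
    # (size, ctime, mtime) against the last recorded values.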
if db_entry is None:
return True
if not pathinfo.exists and db_entry.size is None:
return False
return ((pathinfo.size, pathinfo.ctime_ns, pathinfo.mtime_ns) !=
(db_entry.size, db_entry.ctime_ns, db_entry.mtime_ns))
class MagicFolder(service.MultiService):
name = 'magic-folder'
def __init__(self, client, upload_dircap, collective_dircap, local_path_u, dbfile, umask,
pending_delay=1.0, clock=None):
precondition_abspath(local_path_u)
service.MultiService.__init__(self)
clock = clock or reactor
db = magicfolderdb.get_magicfolderdb(dbfile, create_version=(magicfolderdb.SCHEMA_v1, 1))
if db is None:
return Failure(Exception('ERROR: Unable to load magic folder db.'))
# for tests
self._client = client
self._db = db
upload_dirnode = self._client.create_node_from_uri(upload_dircap)
collective_dirnode = self._client.create_node_from_uri(collective_dircap)
self.uploader = Uploader(client, local_path_u, db, upload_dirnode, pending_delay, clock)
self.downloader = Downloader(client, local_path_u, db, collective_dirnode,
upload_dirnode.get_readonly_uri(), clock, self.uploader.is_pending, umask)
def startService(self):
# TODO: why is this being called more than once?
if self.running:
return defer.succeed(None)
service.MultiService.startService(self)
return self.uploader.start_monitoring()
def ready(self):
"""ready is used to signal us to start
processing the upload and download items...
"""
self.uploader.start_uploading() # synchronous
return self.downloader.start_downloading()
def finish(self):
d = self.uploader.stop()
d2 = self.downloader.stop()
d.addCallback(lambda ign: d2)
return d
def remove_service(self):
return service.MultiService.disownServiceParent(self)
class QueueMixin(HookMixin):
scan_interval = 0
def __init__(self, client, local_path_u, db, name, clock, delay=0):
self._client = client
self._local_path_u = local_path_u
self._local_filepath = to_filepath(local_path_u)
self._db = db
self._name = name
self._clock = clock
self._hooks = {
'processed': None,
'started': None,
'iteration': None,
}
self.started_d = self.set_hook('started')
if not self._local_filepath.exists():
raise AssertionError("The '[magic_folder] local.directory' parameter was %s "
"but there is no directory at that location."
% quote_local_unicode_path(self._local_path_u))
if not self._local_filepath.isdir():
raise AssertionError("The '[magic_folder] local.directory' parameter was %s "
"but the thing at that location is not a directory."
% quote_local_unicode_path(self._local_path_u))
self._deque = deque()
# do we also want to bound on "maximum age"?
self._process_history = deque(maxlen=20)
self._stopped = False
# XXX pass in an initial value for this; it seems like .10 broke this and it's always 0
self._turn_delay = delay
self._log('delay is %f' % self._turn_delay)
# a Deferred to wait for the _do_processing() loop to exit
# (gets set to the return from _do_processing() if we get that
# far)
self._processing = defer.succeed(None)
def get_status(self):
"""
Returns an iterable of instances that implement IQueuedItem
"""
for item in self._deque:
yield item
for item in self._process_history:
yield item
def _get_filepath(self, relpath_u):
self._log("_get_filepath(%r)" % (relpath_u,))
return extend_filepath(self._local_filepath, relpath_u.split(u"/"))
def _begin_processing(self, res):
self._log("starting processing loop")
self._processing = self._do_processing()
# if there are any errors coming out of _do_processing then
# our loop is done and we're hosed (i.e. _do_processing()
# itself has a bug in it)
def fatal_error(f):
self._log("internal error: %s" % (f.value,))
self._log(f)
self._processing.addErrback(fatal_error)
return res
@defer.inlineCallbacks
def _do_processing(self):
"""
This is an infinite loop that processes things out of the _deque.
One iteration runs self._process_deque which calls
_when_queue_is_empty() and then completely drains the _deque
        (processing each item). After that we yield for _turn_delay
        seconds.
"""
# we subtract here so there's a scan on the very first iteration
last_scan = self._clock.seconds() - self.scan_interval
while not self._stopped:
self._log("doing iteration")
d = task.deferLater(self._clock, self._turn_delay, lambda: None)
# ">=" is important here if scan scan_interval is 0
if self._clock.seconds() - last_scan >= self.scan_interval:
# XXX can't we unify the "_full_scan" vs what
# Downloader does...
last_scan = self._clock.seconds()
                yield self._when_queue_is_empty()  # (a no-op for us; only Downloader uses it...)
self._log("did scan; now %d" % last_scan)
else:
self._log("skipped scan")
# process anything in our queue
yield self._process_deque()
self._log("one loop; call_hook iteration %r" % self)
self._call_hook(None, 'iteration')
# we want to have our callLater queued in the reactor
# *before* we trigger the 'iteration' hook, so that hook
# can successfully advance the Clock and bypass the delay
# if required (e.g. in the tests).
if not self._stopped:
self._log("waiting... %r" % d)
yield d
self._log("stopped")
def _when_queue_is_empty(self):
return
@defer.inlineCallbacks
def _process_deque(self):
self._log("_process_deque %r" % (self._deque,))
        # Process everything currently in the queue. We turn it into a
        # list so that any new items added while we're processing won't
        # run until the next iteration.
to_process = list(self._deque)
self._deque.clear()
self._count('objects_queued', -len(to_process))
self._log("%d items to process" % len(to_process), )
for item in to_process:
self._process_history.appendleft(item)
try:
self._log(" processing '%r'" % (item,))
proc = yield self._process(item)
self._log(" done: %r" % proc)
except Exception as e:
log.err("processing '%r' failed: %s" % (item, e))
proc = None # actually in old _lazy_tail way, proc would be Failure
# XXX can we just get rid of the hooks now?
yield self._call_hook(proc, 'processed')
def _get_relpath(self, filepath):
self._log("_get_relpath(%r)" % (filepath,))
segments = unicode_segments_from(filepath, self._local_filepath)
self._log("segments = %r" % (segments,))
return u"/".join(segments)
def _count(self, counter_name, delta=1):
ctr = 'magic_folder.%s.%s' % (self._name, counter_name)
self._client.stats_provider.count(ctr, delta)
self._log("%s += %r (now %r)" % (counter_name, delta, self._client.stats_provider.counters[ctr]))
def _logcb(self, res, msg):
self._log("%s: %r" % (msg, res))
return res
def _log(self, msg):
s = "Magic Folder %s %s: %s" % (quote_output(self._client.nickname), self._name, msg)
self._client.log(s)
# this isn't in interfaces.py because it's very specific to QueueMixin
class IQueuedItem(Interface):
relpath_u = Attribute("The path this item represents")
progress = Attribute("A PercentProgress instance")
def set_status(self, status, current_time=None):
"""
"""
def status_time(self, state):
"""
Get the time of particular state change, or None
"""
def status_history(self):
"""
All status changes, sorted latest -> oldest
"""
@implementer(IQueuedItem)
class QueuedItem(object):
def __init__(self, relpath_u, progress):
self.relpath_u = relpath_u
self.progress = progress
self._status_history = dict()
def set_status(self, status, current_time=None):
if current_time is None:
current_time = time.time()
self._status_history[status] = current_time
def status_time(self, state):
"""
Returns None if there's no status-update for 'state', else returns
the timestamp when that state was reached.
"""
return self._status_history.get(state, None)
def status_history(self):
"""
Returns a list of 2-tuples of (state, timestamp) sorted by timestamp
"""
hist = self._status_history.items()
hist.sort(key=lambda x: x[1])
return hist
class UploadItem(QueuedItem):
"""
Represents a single item in the _deque of the Uploader
"""
pass
class Uploader(QueueMixin):
def __init__(self, client, local_path_u, db, upload_dirnode, pending_delay, clock):
QueueMixin.__init__(self, client, local_path_u, db, 'uploader', clock, delay=pending_delay)
self.is_ready = False
if not IDirectoryNode.providedBy(upload_dirnode):
raise AssertionError("The URI in '%s' does not refer to a directory."
% os.path.join('private', 'magic_folder_dircap'))
if upload_dirnode.is_unknown() or upload_dirnode.is_readonly():
raise AssertionError("The URI in '%s' is not a writecap to a directory."
% os.path.join('private', 'magic_folder_dircap'))
self._upload_dirnode = upload_dirnode
self._inotify = get_inotify_module()
self._notifier = self._inotify.INotify()
self._pending = set() # of unicode relpaths
self._periodic_full_scan_duration = 10 * 60 # perform a full scan every 10 minutes
if hasattr(self._notifier, 'set_pending_delay'):
self._notifier.set_pending_delay(pending_delay)
# TODO: what about IN_MOVE_SELF and IN_UNMOUNT?
#
self.mask = ( self._inotify.IN_CREATE
| self._inotify.IN_CLOSE_WRITE
| self._inotify.IN_MOVED_TO
| self._inotify.IN_MOVED_FROM
| self._inotify.IN_DELETE
| self._inotify.IN_ONLYDIR
| IN_EXCL_UNLINK
)
self._notifier.watch(self._local_filepath, mask=self.mask, callbacks=[self._notify],
recursive=False)  # was: True
def start_monitoring(self):
self._log("start_monitoring")
d = defer.succeed(None)
d.addCallback(lambda ign: self._notifier.startReading())
d.addCallback(lambda ign: self._count('dirs_monitored'))
d.addBoth(self._call_hook, 'started')
return d
def stop(self):
self._log("stop")
self._notifier.stopReading()
self._count('dirs_monitored', -1)
self.periodic_callid.cancel()
if hasattr(self._notifier, 'wait_until_stopped'):
d = self._notifier.wait_until_stopped()
else:
d = defer.succeed(None)
self._stopped = True
# wait for processing loop to actually exit
d.addCallback(lambda ign: self._processing)
return d
def start_uploading(self):
self._log("start_uploading")
self.is_ready = True
all_relpaths = self._db.get_all_relpaths()
self._log("all relpaths: %r" % (all_relpaths,))
for relpath_u in all_relpaths:
self._add_pending(relpath_u)
self._full_scan()
# XXX changed this while re-basing; double check we can
# *really* just call this synchronously.
return self._begin_processing(None)
def _full_scan(self):
self.periodic_callid = self._clock.callLater(self._periodic_full_scan_duration, self._full_scan)
self._log("FULL SCAN")
self._log("_pending %r" % (self._pending))
self._scan(u"")
def _add_pending(self, relpath_u):
self._log("add pending %r" % (relpath_u,))
if magicpath.should_ignore_file(relpath_u):
self._log("_add_pending %r but should_ignore()==True" % (relpath_u,))
return
if relpath_u in self._pending:
self._log("_add_pending %r but already pending" % (relpath_u,))
return
self._pending.add(relpath_u)
progress = PercentProgress()
item = UploadItem(relpath_u, progress)
item.set_status('queued', self._clock.seconds())
self._deque.append(item)
self._count('objects_queued')
self._log("_add_pending(%r) queued item" % (relpath_u,))
def _scan(self, reldir_u):
# Scan a directory by (synchronously) handing the paths of all its
# children to _add_pending, which adds them to self._pending and also
# queues an UploadItem for each on the deque.
self._log("SCAN '%r'" % (reldir_u,))
fp = self._get_filepath(reldir_u)
try:
children = listdir_filepath(fp)
except EnvironmentError:
raise Exception("WARNING: magic folder: permission denied on directory %s"
% quote_filepath(fp))
except FilenameEncodingError:
raise Exception("WARNING: magic folder: could not list directory %s due to a filename encoding error"
% quote_filepath(fp))
for child in children:
self._log(" scan; child %r" % (child,))
_assert(isinstance(child, unicode), child=child)
self._add_pending("%s/%s" % (reldir_u, child) if reldir_u != u"" else child)
def is_pending(self, relpath_u):
return relpath_u in self._pending
def _notify(self, opaque, path, events_mask):
self._log("inotify event %r, %r, %r\n" % (opaque, path, ', '.join(self._inotify.humanReadableMask(events_mask))))
relpath_u = self._get_relpath(path)
# We filter out IN_CREATE events not associated with a directory.
# Acting on IN_CREATE for files could cause us to read and upload
# a possibly-incomplete file before the application has closed it.
# There should always be an IN_CLOSE_WRITE after an IN_CREATE, I think.
# It isn't possible to avoid watching for IN_CREATE at all, because
# it is the only event notified for a directory creation.
if ((events_mask & self._inotify.IN_CREATE) != 0 and
(events_mask & self._inotify.IN_ISDIR) == 0):
self._log("ignoring event for %r (creation of non-directory)\n" % (relpath_u,))
return
if relpath_u in self._pending:
self._log("not queueing %r because it is already pending" % (relpath_u,))
return
if magicpath.should_ignore_file(relpath_u):
self._log("ignoring event for %r (ignorable path)" % (relpath_u,))
return
self._add_pending(relpath_u)
def _process(self, item):
# Uploader
relpath_u = item.relpath_u
self._log("_process(%r)" % (relpath_u,))
item.set_status('started', self._clock.seconds())
if relpath_u is None:
item.set_status('invalid_path', self._clock.seconds())
return defer.succeed(False)
precondition(isinstance(relpath_u, unicode), relpath_u)
precondition(not relpath_u.endswith(u'/'), relpath_u)
d = defer.succeed(False)
def _maybe_upload(ign, now=None):
self._log("_maybe_upload: relpath_u=%r, now=%r" % (relpath_u, now))
if now is None:
now = time.time()
fp = self._get_filepath(relpath_u)
pathinfo = get_pathinfo(unicode_from_filepath(fp))
self._log("about to remove %r from pending set %r" %
(relpath_u, self._pending))
try:
self._pending.remove(relpath_u)
except KeyError:
self._log("WRONG that %r wasn't in pending" % (relpath_u,))
encoded_path_u = magicpath.path2magic(relpath_u)
if not pathinfo.exists:
# FIXME merge this with the 'isfile' case.
self._log("notified object %s disappeared (this is normal)" % quote_filepath(fp))
self._count('objects_disappeared')
db_entry = self._db.get_db_entry(relpath_u)
if db_entry is None:
return False
last_downloaded_timestamp = now # is this correct?
if is_new_file(pathinfo, db_entry):
new_version = db_entry.version + 1
else:
self._log("Not uploading %r" % (relpath_u,))
self._count('objects_not_uploaded')
return False
metadata = {
'version': new_version,
'deleted': True,
'last_downloaded_timestamp': last_downloaded_timestamp,
}
if db_entry.last_downloaded_uri is not None:
metadata['last_downloaded_uri'] = db_entry.last_downloaded_uri
empty_uploadable = Data("", self._client.convergence)
d2 = self._upload_dirnode.add_file(
encoded_path_u, empty_uploadable,
metadata=metadata,
overwrite=True,
progress=item.progress,
)
def _add_db_entry(filenode):
filecap = filenode.get_uri()
last_downloaded_uri = metadata.get('last_downloaded_uri', None)
self._db.did_upload_version(relpath_u, new_version, filecap,
last_downloaded_uri, last_downloaded_timestamp,
pathinfo)
self._count('files_uploaded')
d2.addCallback(_add_db_entry)
d2.addCallback(lambda ign: True)
return d2
elif pathinfo.islink:
self.warn("WARNING: cannot upload symlink %s" % quote_filepath(fp))
return False
elif pathinfo.isdir:
self._log("ISDIR")
if not getattr(self._notifier, 'recursive_includes_new_subdirectories', False):
self._notifier.watch(fp, mask=self.mask, callbacks=[self._notify], recursive=True)
db_entry = self._db.get_db_entry(relpath_u)
self._log("isdir dbentry %r" % (db_entry,))
if not is_new_file(pathinfo, db_entry):
self._log("NOT A NEW FILE")
return False
uploadable = Data("", self._client.convergence)
encoded_path_u += magicpath.path2magic(u"/")
self._log("encoded_path_u = %r" % (encoded_path_u,))
upload_d = self._upload_dirnode.add_file(
encoded_path_u, uploadable,
metadata={"version": 0},
overwrite=True,
progress=item.progress,
)
def _dir_succeeded(ign):
self._log("created subdirectory %r" % (relpath_u,))
self._count('directories_created')
def _dir_failed(f):
self._log("failed to create subdirectory %r" % (relpath_u,))
return f
upload_d.addCallbacks(_dir_succeeded, _dir_failed)
upload_d.addCallback(lambda ign: self._scan(relpath_u))
upload_d.addCallback(lambda ign: True)
return upload_d
elif pathinfo.isfile:
db_entry = self._db.get_db_entry(relpath_u)
last_downloaded_timestamp = now
if db_entry is None:
new_version = 0
elif is_new_file(pathinfo, db_entry):
new_version = db_entry.version + 1
else:
self._log("Not uploading %r" % (relpath_u,))
self._count('objects_not_uploaded')
return False
metadata = {
'version': new_version,
'last_downloaded_timestamp': last_downloaded_timestamp,
}
if db_entry is not None and db_entry.last_downloaded_uri is not None:
metadata['last_downloaded_uri'] = db_entry.last_downloaded_uri
uploadable = FileName(unicode_from_filepath(fp), self._client.convergence)
d2 = self._upload_dirnode.add_file(
encoded_path_u, uploadable,
metadata=metadata,
overwrite=True,
progress=item.progress,
)
def _add_db_entry(filenode):
filecap = filenode.get_uri()
last_downloaded_uri = metadata.get('last_downloaded_uri', None)
self._db.did_upload_version(relpath_u, new_version, filecap,
last_downloaded_uri, last_downloaded_timestamp,
pathinfo)
self._count('files_uploaded')
return True
d2.addCallback(_add_db_entry)
return d2
else:
self.warn("WARNING: cannot process special file %s" % quote_filepath(fp))
return False
d.addCallback(_maybe_upload)
def _succeeded(res):
self._log("_succeeded(%r)" % (res,))
if res:
self._count('objects_succeeded')
# TODO: maybe we want the status to be 'ignored' if res is False
item.set_status('success', self._clock.seconds())
return res
def _failed(f):
self._count('objects_failed')
self._log("%s while processing %r" % (f, relpath_u))
item.set_status('failure', self._clock.seconds())
return f
d.addCallbacks(_succeeded, _failed)
return d
def _get_metadata(self, encoded_path_u):
try:
d = self._upload_dirnode.get_metadata_for(encoded_path_u)
except KeyError:
return Failure()
return d
def _get_filenode(self, encoded_path_u):
try:
d = self._upload_dirnode.get(encoded_path_u)
except KeyError:
return Failure()
return d
class WriteFileMixin(object):
FUDGE_SECONDS = 10.0
def _get_conflicted_filename(self, abspath_u):
return abspath_u + u".conflict"
def _write_downloaded_file(self, local_path_u, abspath_u, file_contents, is_conflict=False, now=None):
self._log("_write_downloaded_file(%r, <%d bytes>, is_conflict=%r, now=%r)"
% (abspath_u, len(file_contents), is_conflict, now))
# 1. Write a temporary file, say .foo.tmp.
# 2. is_conflict determines whether this is an overwrite or a conflict.
# 3. Set the mtime of the replacement file to be T seconds before the
# current local time.
# 4. Perform a file replacement with backup filename foo.backup,
# replaced file foo, and replacement file .foo.tmp. If any step of
# this operation fails, reclassify as a conflict and stop.
#
# Returns the path of the destination file.
precondition_abspath(abspath_u)
replacement_path_u = abspath_u + u".tmp" # FIXME more unique
backup_path_u = abspath_u + u".backup"
if now is None:
now = time.time()
initial_path_u = os.path.dirname(abspath_u)
fileutil.make_dirs_with_absolute_mode(local_path_u, initial_path_u, (~ self._umask) & 0777)
fileutil.write(replacement_path_u, file_contents)
os.chmod(replacement_path_u, (~ self._umask) & 0777)
# FUDGE_SECONDS is used to determine if another process
# has written to the same file concurrently. This is described
# in the Earth Dragon section of our design document:
# docs/proposed/magic-folder/remote-to-local-sync.rst
os.utime(replacement_path_u, (now, now - self.FUDGE_SECONDS))
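# (Roughly: because we set the mtime FUDGE_SECONDS into the past, a
# concurrent local write -- which leaves a fresh mtime -- can still be
# detected as a newer change than the file we just wrote.)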
if is_conflict:
return self._rename_conflicted_file(abspath_u, replacement_path_u)
else:
try:
fileutil.replace_file(abspath_u, replacement_path_u, backup_path_u)
return abspath_u
except fileutil.ConflictError:
return self._rename_conflicted_file(abspath_u, replacement_path_u)
def _rename_conflicted_file(self, abspath_u, replacement_path_u):
self._log("_rename_conflicted_file(%r, %r)" % (abspath_u, replacement_path_u))
conflict_path_u = self._get_conflicted_filename(abspath_u)
if False:
if os.path.isfile(replacement_path_u):
print "%r exists" % (replacement_path_u,)
if os.path.isfile(conflict_path_u):
print "%r exists" % (conflict_path_u,)
fileutil.rename_no_overwrite(replacement_path_u, conflict_path_u)
return conflict_path_u
def _rename_deleted_file(self, abspath_u):
self._log('renaming deleted file to backup: %s' % (abspath_u,))
try:
fileutil.rename_no_overwrite(abspath_u, abspath_u + u'.backup')
except OSError:
self._log("Already gone: '%s'" % (abspath_u,))
return abspath_u
class DownloadItem(QueuedItem):
"""
Represents a single item in the _deque of the Downloader
"""
def __init__(self, relpath_u, progress, filenode, metadata):
super(DownloadItem, self).__init__(relpath_u, progress)
self.file_node = filenode
self.metadata = metadata
class Downloader(QueueMixin, WriteFileMixin):
scan_interval = 3
def __init__(self, client, local_path_u, db, collective_dirnode,
upload_readonly_dircap, clock, is_upload_pending, umask):
QueueMixin.__init__(self, client, local_path_u, db, 'downloader', clock, delay=self.scan_interval)
if not IDirectoryNode.providedBy(collective_dirnode):
raise AssertionError("The URI in '%s' does not refer to a directory."
% os.path.join('private', 'collective_dircap'))
if collective_dirnode.is_unknown() or not collective_dirnode.is_readonly():
raise AssertionError("The URI in '%s' is not a readonly cap to a directory."
% os.path.join('private', 'collective_dircap'))
self._collective_dirnode = collective_dirnode
self._upload_readonly_dircap = upload_readonly_dircap
self._is_upload_pending = is_upload_pending
self._umask = umask
def start_downloading(self):
self._log("start_downloading")
self._turn_delay = self.scan_interval
files = self._db.get_all_relpaths()
self._log("all files %s" % files)
d = self._scan_remote_collective(scan_self=True)
d.addBoth(self._logcb, "after _scan_remote_collective 0")
d.addCallback(self._begin_processing)
return d
def stop(self):
self._log("stop")
self._stopped = True
d = defer.succeed(None)
# wait for processing loop to actually exit
d.addCallback(lambda ign: self._processing)
return d
def _should_download(self, relpath_u, remote_version):
"""
_should_download returns a bool indicating whether or not a remote object should be downloaded.
We check the remote metadata version against our magic-folder db version number;
latest version wins.
"""
self._log("_should_download(%r, %r)" % (relpath_u, remote_version))
if magicpath.should_ignore_file(relpath_u):
self._log("nope")
return False
self._log("yep")
db_entry = self._db.get_db_entry(relpath_u)
if db_entry is None:
return True
self._log("version %r" % (db_entry.version,))
return (db_entry.version < remote_version)
def _get_local_latest(self, relpath_u):
"""
_get_local_latest takes a unicode path string checks to see if this file object
exists in our magic-folder db; if not then return None
else check for an entry in our magic-folder db and return the version number.
"""
if not self._get_filepath(relpath_u).exists():
return None
db_entry = self._db.get_db_entry(relpath_u)
return None if db_entry is None else db_entry.version
def _get_collective_latest_file(self, filename):
"""
_get_collective_latest_file takes a file path pointing to a file managed by
magic-folder and returns a deferred that fires with the two tuple containing a
file node and metadata for the latest version of the file located in the
magic-folder collective directory.
"""
collective_dirmap_d = self._collective_dirnode.list()
def scan_collective(result):
list_of_deferreds = []
for dir_name in result.keys():
# XXX make sure it's a directory
d = defer.succeed(None)
d.addCallback(lambda x, dir_name=dir_name: result[dir_name][0].get_child_and_metadata(filename))
list_of_deferreds.append(d)
deferList = defer.DeferredList(list_of_deferreds, consumeErrors=True)
return deferList
collective_dirmap_d.addCallback(scan_collective)
def highest_version(deferredList):
max_version = 0
metadata = None
node = None
for success, result in deferredList:
if success:
if node is None or result[1]['version'] > max_version:
node, metadata = result
max_version = result[1]['version']
return node, metadata
collective_dirmap_d.addCallback(highest_version)
return collective_dirmap_d
def _scan_remote_dmd(self, nickname, dirnode, scan_batch):
self._log("_scan_remote_dmd nickname %r" % (nickname,))
d = dirnode.list()
def scan_listing(listing_map):
for encoded_relpath_u in listing_map.keys():
relpath_u = magicpath.magic2path(encoded_relpath_u)
self._log("found %r" % (relpath_u,))
file_node, metadata = listing_map[encoded_relpath_u]
local_version = self._get_local_latest(relpath_u)
remote_version = metadata.get('version', None)
self._log("%r has local version %r, remote version %r" % (relpath_u, local_version, remote_version))
if local_version is None or remote_version is None or local_version < remote_version:
self._log("%r added to download queue" % (relpath_u,))
if relpath_u in scan_batch:
scan_batch[relpath_u] += [(file_node, metadata)]
else:
scan_batch[relpath_u] = [(file_node, metadata)]
d.addCallback(scan_listing)
d.addBoth(self._logcb, "end of _scan_remote_dmd")
return d
def _scan_remote_collective(self, scan_self=False):
self._log("_scan_remote_collective")
scan_batch = {} # path -> [(filenode, metadata)]
d = self._collective_dirnode.list()
def scan_collective(dirmap):
d2 = defer.succeed(None)
for dir_name in dirmap:
(dirnode, metadata) = dirmap[dir_name]
if scan_self or dirnode.get_readonly_uri() != self._upload_readonly_dircap:
d2.addCallback(lambda ign, dir_name=dir_name, dirnode=dirnode:
self._scan_remote_dmd(dir_name, dirnode, scan_batch))
def _err(f, dir_name=dir_name):
self._log("failed to scan DMD for client %r: %s" % (dir_name, f))
# XXX what should we do to make this failure more visible to users?
d2.addErrback(_err)
return d2
d.addCallback(scan_collective)
def _filter_batch_to_deque(ign):
self._log("deque = %r, scan_batch = %r" % (self._deque, scan_batch))
for relpath_u in scan_batch.keys():
file_node, metadata = max(scan_batch[relpath_u], key=lambda x: x[1]['version'])
if self._should_download(relpath_u, metadata['version']):
to_dl = DownloadItem(
relpath_u,
PercentProgress(file_node.get_size()),
file_node,
metadata,
)
to_dl.set_status('queued', self._clock.seconds())
self._deque.append(to_dl)
self._count("objects_queued")
else:
self._log("Excluding %r" % (relpath_u,))
self._call_hook(None, 'processed', async=True) # await this maybe-Deferred??
self._log("deque after = %r" % (self._deque,))
d.addCallback(_filter_batch_to_deque)
return d
@defer.inlineCallbacks
def _when_queue_is_empty(self):
# XXX can we amalgamate all the "scan" stuff and just call it
# directly from QueueMixin?
x = None
try:
x = yield self._scan(None)
except Exception as e:
self._log("_scan failed: %s" % (repr(e),))
defer.returnValue(x)
def _scan(self, ign):
return self._scan_remote_collective()
def _process(self, item):
# Downloader
self._log("_process(%r)" % (item,))
now = self._clock.seconds()
self._log("started! %s" % (now,))
item.set_status('started', now)
fp = self._get_filepath(item.relpath_u)
abspath_u = unicode_from_filepath(fp)
conflict_path_u = self._get_conflicted_filename(abspath_u)
d = defer.succeed(False)
def do_update_db(written_abspath_u):
filecap = item.file_node.get_uri()
last_uploaded_uri = item.metadata.get('last_uploaded_uri', None)
self._log("DOUPDATEDB %r" % written_abspath_u)
last_downloaded_uri = filecap
last_downloaded_timestamp = now
written_pathinfo = get_pathinfo(written_abspath_u)
if not written_pathinfo.exists and not item.metadata.get('deleted', False):
raise Exception("downloaded object %s disappeared" % quote_local_unicode_path(written_abspath_u))
self._db.did_upload_version(
item.relpath_u, item.metadata['version'], last_uploaded_uri,
last_downloaded_uri, last_downloaded_timestamp, written_pathinfo,
)
self._count('objects_downloaded')
item.set_status('success', self._clock.seconds())
return True
def failed(f):
item.set_status('failure', self._clock.seconds())
self._log("download failed: %s" % (str(f),))
self._count('objects_failed')
return f
if os.path.isfile(conflict_path_u):
def fail(res):
raise ConflictError("download failed: already conflicted: %r" % (item.relpath_u,))
d.addCallback(fail)
else:
is_conflict = False
db_entry = self._db.get_db_entry(item.relpath_u)
dmd_last_downloaded_uri = item.metadata.get('last_downloaded_uri', None)
dmd_last_uploaded_uri = item.metadata.get('last_uploaded_uri', None)
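# Roughly: the incoming version counts as a conflict if the remote
# last_downloaded_uri no longer matches what we last downloaded, or the
# remote last_uploaded_uri differs from ours, or we have our own upload
# pending for this path; conflicted downloads are written to a separate
# ".conflict" file instead of overwriting (see WriteFileMixin).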
if db_entry:
if dmd_last_downloaded_uri is not None and db_entry.last_downloaded_uri is not None:
if dmd_last_downloaded_uri != db_entry.last_downloaded_uri:
is_conflict = True
self._count('objects_conflicted')
elif dmd_last_uploaded_uri is not None and dmd_last_uploaded_uri != db_entry.last_uploaded_uri:
is_conflict = True
self._count('objects_conflicted')
elif self._is_upload_pending(item.relpath_u):
is_conflict = True
self._count('objects_conflicted')
if item.relpath_u.endswith(u"/"):
if item.metadata.get('deleted', False):
self._log("rmdir(%r) ignored" % (abspath_u,))
else:
self._log("mkdir(%r)" % (abspath_u,))
d.addCallback(lambda ign: fileutil.make_dirs(abspath_u))
d.addCallback(lambda ign: abspath_u)
else:
if item.metadata.get('deleted', False):
d.addCallback(lambda ign: self._rename_deleted_file(abspath_u))
else:
d.addCallback(lambda ign: item.file_node.download_best_version(progress=item.progress))
d.addCallback(lambda contents: self._write_downloaded_file(self._local_path_u, abspath_u, contents,
is_conflict=is_conflict))
d.addCallbacks(do_update_db)
d.addErrback(failed)
def trap_conflicts(f):
f.trap(ConflictError)
self._log("IGNORE CONFLICT ERROR %r" % f)
return False
d.addErrback(trap_conflicts)
return d

View File

@ -1,7 +1,7 @@
import binascii
import time
now = time.time
from time import time as now
from zope.interface import implements
from twisted.internet import defer

View File

@ -0,0 +1,104 @@
import sys
from collections import namedtuple
from allmydata.util.dbutil import get_db, DBError
# magic-folder db schema version 1
SCHEMA_v1 = """
CREATE TABLE version
(
version INTEGER -- contains one row, set to 1
);
CREATE TABLE local_files
(
path VARCHAR(1024) PRIMARY KEY, -- UTF-8 filename relative to local magic folder dir
size INTEGER, -- ST_SIZE, or NULL if the file has been deleted
mtime_ns INTEGER, -- ST_MTIME in nanoseconds
ctime_ns INTEGER, -- ST_CTIME in nanoseconds
version INTEGER,
last_uploaded_uri VARCHAR(256), -- URI:CHK:...
last_downloaded_uri VARCHAR(256), -- URI:CHK:...
last_downloaded_timestamp TIMESTAMP
);
"""
def get_magicfolderdb(dbfile, stderr=sys.stderr,
create_version=(SCHEMA_v1, 1), just_create=False):
# Open or create the given magic-folder db file. The parent directory
# must exist.
try:
(sqlite3, db) = get_db(dbfile, stderr, create_version,
just_create=just_create, dbname="magicfolderdb")
if create_version[1] in (1, 2):
return MagicFolderDB(sqlite3, db)
else:
print >>stderr, "invalid magicfolderdb schema version specified"
return None
except DBError, e:
print >>stderr, e
return None
PathEntry = namedtuple('PathEntry', 'size mtime_ns ctime_ns version last_uploaded_uri '
'last_downloaded_uri last_downloaded_timestamp')
class MagicFolderDB(object):
VERSION = 1
def __init__(self, sqlite_module, connection):
self.sqlite_module = sqlite_module
self.connection = connection
self.cursor = connection.cursor()
def close(self):
self.connection.close()
def get_db_entry(self, relpath_u):
"""
Retrieve the entry in the database for a given path, or return None
if there is no such entry.
"""
c = self.cursor
c.execute("SELECT size, mtime_ns, ctime_ns, version, last_uploaded_uri,"
" last_downloaded_uri, last_downloaded_timestamp"
" FROM local_files"
" WHERE path=?",
(relpath_u,))
row = self.cursor.fetchone()
if not row:
return None
else:
(size, mtime_ns, ctime_ns, version, last_uploaded_uri,
last_downloaded_uri, last_downloaded_timestamp) = row
return PathEntry(size=size, mtime_ns=mtime_ns, ctime_ns=ctime_ns, version=version,
last_uploaded_uri=last_uploaded_uri,
last_downloaded_uri=last_downloaded_uri,
last_downloaded_timestamp=last_downloaded_timestamp)
def get_all_relpaths(self):
"""
Retrieve a set of all relpaths of files that have had an entry in magic folder db
(i.e. that have been downloaded at least once).
"""
self.cursor.execute("SELECT path FROM local_files")
rows = self.cursor.fetchall()
return set([r[0] for r in rows])
def did_upload_version(self, relpath_u, version, last_uploaded_uri, last_downloaded_uri, last_downloaded_timestamp, pathinfo):
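# Upsert by hand: try INSERT first and, if the row for this path already
# exists (or the INSERT otherwise fails), fall back to UPDATE.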
try:
self.cursor.execute("INSERT INTO local_files VALUES (?,?,?,?,?,?,?,?)",
(relpath_u, pathinfo.size, pathinfo.mtime_ns, pathinfo.ctime_ns,
version, last_uploaded_uri, last_downloaded_uri,
last_downloaded_timestamp))
except (self.sqlite_module.IntegrityError, self.sqlite_module.OperationalError):
self.cursor.execute("UPDATE local_files"
" SET size=?, mtime_ns=?, ctime_ns=?, version=?, last_uploaded_uri=?,"
" last_downloaded_uri=?, last_downloaded_timestamp=?"
" WHERE path=?",
(pathinfo.size, pathinfo.mtime_ns, pathinfo.ctime_ns, version,
last_uploaded_uri, last_downloaded_uri, last_downloaded_timestamp,
relpath_u))
self.connection.commit()

View File

@ -0,0 +1,33 @@
import re
import os.path
from allmydata.util.assertutil import precondition, _assert
def path2magic(path):
return re.sub(ur'[/@]', lambda m: {u'/': u'@_', u'@': u'@@'}[m.group(0)], path)
def magic2path(path):
return re.sub(ur'@[_@]', lambda m: {u'@_': u'/', u'@@': u'@'}[m.group(0)], path)
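# Illustrative example: path2magic(u"sub/dir@x") == u"sub@_dir@@x", and
# magic2path() inverts that mapping exactly.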
IGNORE_SUFFIXES = [u'.backup', u'.tmp', u'.conflict']
IGNORE_PREFIXES = [u'.']
def should_ignore_file(path_u):
precondition(isinstance(path_u, unicode), path_u=path_u)
for suffix in IGNORE_SUFFIXES:
if path_u.endswith(suffix):
return True
while path_u != u"":
oldpath_u = path_u
path_u, tail_u = os.path.split(path_u)
if tail_u.startswith(u"."):
return True
if path_u == oldpath_u:
return True # the path was absolute
_assert(len(path_u) < len(oldpath_u), path_u=path_u, oldpath_u=oldpath_u)
return False

View File

@ -173,6 +173,8 @@ class BackupDB_v2:
"""
path = abspath_expanduser_unicode(path)
# TODO: consider using get_pathinfo.
s = os.stat(path)
size = s[stat.ST_SIZE]
ctime = s[stat.ST_CTIME]

View File

@ -136,14 +136,6 @@ def create_node(config, out=sys.stdout, err=sys.stderr):
c.write("enabled = false\n")
c.write("\n")
c.write("[drop_upload]\n")
c.write("# Shall this node automatically upload files created or modified in a local directory?\n")
c.write("enabled = false\n")
c.write("# To specify the target of uploads, a mutable directory writecap URI must be placed\n"
"# in 'private/drop_upload_dircap'.\n")
c.write("local.directory = ~/drop_upload\n")
c.write("\n")
c.close()
from allmydata.util import fileutil

View File

@ -0,0 +1,440 @@
import os
import urllib
from sys import stderr
from types import NoneType
from cStringIO import StringIO
from datetime import datetime
import simplejson
from twisted.python import usage
from allmydata.util.assertutil import precondition
from .common import BaseOptions, BasedirOptions, get_aliases
from .cli import MakeDirectoryOptions, LnOptions, CreateAliasOptions
import tahoe_mv
from allmydata.util.encodingutil import argv_to_abspath, argv_to_unicode, to_str, \
quote_local_unicode_path
from allmydata.scripts.common_http import do_http, BadResponse
from allmydata.util import fileutil
from allmydata.util import configutil
from allmydata import uri
from allmydata.util.abbreviate import abbreviate_space, abbreviate_time
INVITE_SEPARATOR = "+"
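# An invite code is "<collective read-cap>+<personal DMD write-cap>":
# invite() below prints one, and join() splits it on INVITE_SEPARATOR.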
class CreateOptions(BasedirOptions):
nickname = None
local_dir = None
synopsis = "MAGIC_ALIAS: [NICKNAME LOCAL_DIR]"
def parseArgs(self, alias, nickname=None, local_dir=None):
BasedirOptions.parseArgs(self)
alias = argv_to_unicode(alias)
if not alias.endswith(u':'):
raise usage.UsageError("An alias must end with a ':' character.")
self.alias = alias[:-1]
self.nickname = None if nickname is None else argv_to_unicode(nickname)
# Expand the path relative to the current directory of the CLI command, not the node.
self.local_dir = None if local_dir is None else argv_to_abspath(local_dir, long_path=False)
if self.nickname and not self.local_dir:
raise usage.UsageError("If NICKNAME is specified then LOCAL_DIR must also be specified.")
node_url_file = os.path.join(self['node-directory'], u"node.url")
self['node-url'] = fileutil.read(node_url_file).strip()
def _delegate_options(source_options, target_options):
target_options.aliases = get_aliases(source_options['node-directory'])
target_options["node-url"] = source_options["node-url"]
target_options["node-directory"] = source_options["node-directory"]
target_options.stdin = StringIO("")
target_options.stdout = StringIO()
target_options.stderr = StringIO()
return target_options
def create(options):
precondition(isinstance(options.alias, unicode), alias=options.alias)
precondition(isinstance(options.nickname, (unicode, NoneType)), nickname=options.nickname)
precondition(isinstance(options.local_dir, (unicode, NoneType)), local_dir=options.local_dir)
from allmydata.scripts import tahoe_add_alias
create_alias_options = _delegate_options(options, CreateAliasOptions())
create_alias_options.alias = options.alias
rc = tahoe_add_alias.create_alias(create_alias_options)
if rc != 0:
print >>options.stderr, create_alias_options.stderr.getvalue()
return rc
print >>options.stdout, create_alias_options.stdout.getvalue()
if options.nickname is not None:
invite_options = _delegate_options(options, InviteOptions())
invite_options.alias = options.alias
invite_options.nickname = options.nickname
rc = invite(invite_options)
if rc != 0:
print >>options.stderr, "magic-folder: failed to invite after create\n"
print >>options.stderr, invite_options.stderr.getvalue()
return rc
invite_code = invite_options.stdout.getvalue().strip()
join_options = _delegate_options(options, JoinOptions())
join_options.local_dir = options.local_dir
join_options.invite_code = invite_code
rc = join(join_options)
if rc != 0:
print >>options.stderr, "magic-folder: failed to join after create\n"
print >>options.stderr, join_options.stderr.getvalue()
return rc
return 0
class InviteOptions(BasedirOptions):
nickname = None
synopsis = "MAGIC_ALIAS: NICKNAME"
stdin = StringIO("")
def parseArgs(self, alias, nickname=None):
BasedirOptions.parseArgs(self)
alias = argv_to_unicode(alias)
if not alias.endswith(u':'):
raise usage.UsageError("An alias must end with a ':' character.")
self.alias = alias[:-1]
self.nickname = argv_to_unicode(nickname)
node_url_file = os.path.join(self['node-directory'], u"node.url")
self['node-url'] = open(node_url_file, "r").read().strip()
aliases = get_aliases(self['node-directory'])
self.aliases = aliases
def invite(options):
precondition(isinstance(options.alias, unicode), alias=options.alias)
precondition(isinstance(options.nickname, unicode), nickname=options.nickname)
from allmydata.scripts import tahoe_mkdir
mkdir_options = _delegate_options(options, MakeDirectoryOptions())
mkdir_options.where = None
rc = tahoe_mkdir.mkdir(mkdir_options)
if rc != 0:
print >>options.stderr, "magic-folder: failed to mkdir\n"
return rc
# FIXME this assumes caps are ASCII.
dmd_write_cap = mkdir_options.stdout.getvalue().strip()
dmd_readonly_cap = uri.from_string(dmd_write_cap).get_readonly().to_string()
if dmd_readonly_cap is None:
print >>options.stderr, "magic-folder: failed to diminish dmd write cap\n"
return 1
magic_write_cap = get_aliases(options["node-directory"])[options.alias]
magic_readonly_cap = uri.from_string(magic_write_cap).get_readonly().to_string()
# tahoe ln CLIENT_READCAP COLLECTIVE_WRITECAP/NICKNAME
ln_options = _delegate_options(options, LnOptions())
ln_options.from_file = unicode(dmd_readonly_cap, 'utf-8')
ln_options.to_file = u"%s/%s" % (unicode(magic_write_cap, 'utf-8'), options.nickname)
rc = tahoe_mv.mv(ln_options, mode="link")
if rc != 0:
print >>options.stderr, "magic-folder: failed to create link\n"
print >>options.stderr, ln_options.stderr.getvalue()
return rc
# FIXME: this assumes caps are ASCII.
print >>options.stdout, "%s%s%s" % (magic_readonly_cap, INVITE_SEPARATOR, dmd_write_cap)
return 0
class JoinOptions(BasedirOptions):
synopsis = "INVITE_CODE LOCAL_DIR"
dmd_write_cap = ""
magic_readonly_cap = ""
def parseArgs(self, invite_code, local_dir):
BasedirOptions.parseArgs(self)
# Expand the path relative to the current directory of the CLI command, not the node.
self.local_dir = None if local_dir is None else argv_to_abspath(local_dir, long_path=False)
self.invite_code = to_str(argv_to_unicode(invite_code))
def join(options):
fields = options.invite_code.split(INVITE_SEPARATOR)
if len(fields) != 2:
raise usage.UsageError("Invalid invite code.")
magic_readonly_cap, dmd_write_cap = fields
dmd_cap_file = os.path.join(options["node-directory"], u"private", u"magic_folder_dircap")
collective_readcap_file = os.path.join(options["node-directory"], u"private", u"collective_dircap")
magic_folder_db_file = os.path.join(options["node-directory"], u"private", u"magicfolderdb.sqlite")
if os.path.exists(dmd_cap_file) or os.path.exists(collective_readcap_file) or os.path.exists(magic_folder_db_file):
print >>options.stderr, ("\nThis client has already joined a magic folder."
"\nUse the 'tahoe magic-folder leave' command first.\n")
return 1
fileutil.write(dmd_cap_file, dmd_write_cap)
fileutil.write(collective_readcap_file, magic_readonly_cap)
config = configutil.get_config(os.path.join(options["node-directory"], u"tahoe.cfg"))
configutil.set_config(config, "magic_folder", "enabled", "True")
configutil.set_config(config, "magic_folder", "local.directory", options.local_dir.encode('utf-8'))
configutil.write_config(os.path.join(options["node-directory"], u"tahoe.cfg"), config)
return 0
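# Typical flow (sketched from the smoke test, assuming a running node):
# Alice runs "tahoe magic-folder create magik: alice LOCAL_DIR", then
# "tahoe magic-folder invite magik: bob" prints an invite code, and Bob
# runs "tahoe magic-folder join INVITE_CODE LOCAL_DIR".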
class LeaveOptions(BasedirOptions):
synopsis = ""
def parseArgs(self):
BasedirOptions.parseArgs(self)
def leave(options):
from ConfigParser import SafeConfigParser
dmd_cap_file = os.path.join(options["node-directory"], u"private", u"magic_folder_dircap")
collective_readcap_file = os.path.join(options["node-directory"], u"private", u"collective_dircap")
magic_folder_db_file = os.path.join(options["node-directory"], u"private", u"magicfolderdb.sqlite")
parser = SafeConfigParser()
parser.read(os.path.join(options["node-directory"], u"tahoe.cfg"))
parser.remove_section("magic_folder")
f = open(os.path.join(options["node-directory"], u"tahoe.cfg"), "w")
parser.write(f)
f.close()
for f in [dmd_cap_file, collective_readcap_file, magic_folder_db_file]:
try:
fileutil.remove(f)
except Exception as e:
print >>options.stderr, ("Warning: unable to remove %s due to %s: %s"
% (quote_local_unicode_path(f), e.__class__.__name__, str(e)))
# if this doesn't return 0, then the CLI stuff fails
return 0
class StatusOptions(BasedirOptions):
nickname = None
synopsis = ""
stdin = StringIO("")
def parseArgs(self):
BasedirOptions.parseArgs(self)
node_url_file = os.path.join(self['node-directory'], u"node.url")
with open(node_url_file, "r") as f:
self['node-url'] = f.read().strip()
def _get_json_for_fragment(options, fragment, method='GET', post_args=None):
nodeurl = options['node-url']
if nodeurl.endswith('/'):
nodeurl = nodeurl[:-1]
url = u'%s/%s' % (nodeurl, fragment)
if method == 'POST':
if post_args is None:
raise ValueError("Must pass post_args= for POST method")
body = urllib.urlencode(post_args)
else:
body = ''
if post_args is not None:
raise ValueError("post_args= only valid for POST method")
resp = do_http(method, url, body=body)
if isinstance(resp, BadResponse):
# specifically NOT using format_http_error() here because the
# URL is pretty sensitive (we're doing /uri/<key>).
raise RuntimeError(
"Failed to get json from '%s': %s" % (nodeurl, resp.error)
)
data = resp.read()
try:
parsed = simplejson.loads(data)
except Exception:
print "Failed to parse reply:\n%s" % (data,)
return []
if parsed is None:
raise RuntimeError("No data from '%s'" % (nodeurl,))
return parsed
def _get_json_for_cap(options, cap):
return _get_json_for_fragment(
options,
'uri/%s?t=json' % urllib.quote(cap),
)
def _print_item_status(item, now, longest):
paddedname = (' ' * (longest - len(item['path']))) + item['path']
if 'failure_at' in item:
ts = datetime.fromtimestamp(item['started_at'])
prog = 'Failed %s (%s)' % (abbreviate_time(now - ts), ts)
elif item['percent_done'] < 100.0:
if 'started_at' not in item:
prog = 'not yet started'
else:
so_far = now - datetime.fromtimestamp(item['started_at'])
if so_far.seconds > 0.0:
rate = item['percent_done'] / so_far.seconds
if rate != 0:
time_left = (100.0 - item['percent_done']) / rate
prog = '%2.1f%% done, around %s left' % (
item['percent_done'],
abbreviate_time(time_left),
)
else:
time_left = None
prog = '%2.1f%% done' % (item['percent_done'],)
else:
prog = 'just started'
else:
prog = ''
for verb in ['finished', 'started', 'queued']:
keyname = verb + '_at'
if keyname in item:
when = datetime.fromtimestamp(item[keyname])
prog = '%s %s' % (verb, abbreviate_time(now - when))
break
print " %s: %s" % (paddedname, prog)
def status(options):
nodedir = options["node-directory"]
with open(os.path.join(nodedir, u"private", u"magic_folder_dircap")) as f:
dmd_cap = f.read().strip()
with open(os.path.join(nodedir, u"private", u"collective_dircap")) as f:
collective_readcap = f.read().strip()
try:
captype, dmd = _get_json_for_cap(options, dmd_cap)
if captype != 'dirnode':
print >>stderr, "magic_folder_dircap isn't a directory capability"
return 2
except RuntimeError as e:
print >>stderr, str(e)
return 1
now = datetime.now()
print "Local files:"
for (name, child) in dmd['children'].items():
captype, meta = child
status = 'good'
size = meta['size']
created = datetime.fromtimestamp(meta['metadata']['tahoe']['linkcrtime'])
version = meta['metadata']['version']
nice_size = abbreviate_space(size)
nice_created = abbreviate_time(now - created)
if captype != 'filenode':
print "%20s: error, should be a filecap" % name
continue
print " %s (%s): %s, version=%s, created %s" % (name, nice_size, status, version, nice_created)
captype, collective = _get_json_for_cap(options, collective_readcap)
print
print "Remote files:"
for (name, data) in collective['children'].items():
if data[0] != 'dirnode':
print "Error: '%s': expected a dirnode, not '%s'" % (name, data[0])
print " %s's remote:" % name
dmd = _get_json_for_cap(options, data[1]['ro_uri'])
if dmd[0] != 'dirnode':
print "Error: should be a dirnode"
continue
for (n, d) in dmd[1]['children'].items():
if d[0] != 'filenode':
print "Error: expected '%s' to be a filenode." % (n,)
meta = d[1]
status = 'good'
size = meta['size']
created = datetime.fromtimestamp(meta['metadata']['tahoe']['linkcrtime'])
version = meta['metadata']['version']
nice_size = abbreviate_space(size)
nice_created = abbreviate_time(now - created)
print " %s (%s): %s, version=%s, created %s" % (n, nice_size, status, version, nice_created)
with open(os.path.join(nodedir, u'private', u'api_auth_token'), 'rb') as f:
token = f.read()
magicdata = _get_json_for_fragment(
options,
'magic_folder?t=json',
method='POST',
post_args=dict(
t='json',
token=token,
)
)
if len(magicdata):
uploads = [item for item in magicdata if item['kind'] == 'upload']
downloads = [item for item in magicdata if item['kind'] == 'download']
longest = max([len(item['path']) for item in magicdata])
if True: # maybe --show-completed option or something?
uploads = [item for item in uploads if item['status'] != 'success']
downloads = [item for item in downloads if item['status'] != 'success']
if len(uploads):
print
print "Uploads:"
for item in uploads:
_print_item_status(item, now, longest)
if len(downloads):
print
print "Downloads:"
for item in downloads:
_print_item_status(item, now, longest)
for item in magicdata:
if item['status'] == 'failure':
print "Failed:", item
return 0
class MagicFolderCommand(BaseOptions):
subCommands = [
["create", None, CreateOptions, "Create a Magic Folder."],
["invite", None, InviteOptions, "Invite someone to a Magic Folder."],
["join", None, JoinOptions, "Join a Magic Folder."],
["leave", None, LeaveOptions, "Leave a Magic Folder."],
["status", None, StatusOptions, "Display stutus of uploads/downloads."],
]
optFlags = [
["debug", "d", "Print full stack-traces"],
]
def postOptions(self):
if not hasattr(self, 'subOptions'):
raise usage.UsageError("must specify a subcommand")
def getSynopsis(self):
return "Usage: tahoe [global-options] magic SUBCOMMAND"
def getUsage(self, width=None):
t = BaseOptions.getUsage(self, width)
t += """\
Please run e.g. 'tahoe magic-folder create --help' for more details on each
subcommand.
"""
return t
subDispatch = {
"create": create,
"invite": invite,
"join": join,
"leave": leave,
"status": status,
}
def do_magic_folder(options):
so = options.subOptions
so.stdout = options.stdout
so.stderr = options.stderr
f = subDispatch[options.subCommand]
try:
return f(so)
except Exception as e:
print("Error: %s" % (e,))
if options['debug']:
raise
subCommands = [
["magic-folder", None, MagicFolderCommand,
"Magic Folder subcommands: use 'tahoe magic-folder' for a list."],
]
dispatch = {
"magic-folder": do_magic_folder,
}

View File

@ -5,7 +5,8 @@ from cStringIO import StringIO
from twisted.python import usage
from allmydata.scripts.common import get_default_nodedir
from allmydata.scripts import debug, create_node, startstop_node, cli, stats_gatherer, admin
from allmydata.scripts import debug, create_node, startstop_node, cli, \
stats_gatherer, admin, magic_folder_cli
from allmydata.util.encodingutil import quote_output, quote_local_unicode_path, get_io_encoding
def GROUP(s):
@ -44,6 +45,7 @@ class Options(usage.Options):
+ debug.subCommands
+ GROUP("Using the filesystem")
+ cli.subCommands
+ magic_folder_cli.subCommands
)
optFlags = [
@ -144,6 +146,8 @@ def runner(argv,
rc = admin.dispatch[command](so)
elif command in cli.dispatch:
rc = cli.dispatch[command](so)
elif command in magic_folder_cli.dispatch:
rc = magic_folder_cli.dispatch[command](so)
elif command in ac_dispatch:
rc = ac_dispatch[command](so, stdout, stderr)
else:

View File

@ -0,0 +1,432 @@
#!/usr/bin/env python
# this is a smoke-test using "./bin/tahoe" to:
#
# 1. create an introducer
# 2. create 5 storage nodes
# 3. create 2 client nodes (alice, bob)
# 4. Alice creates a magic-folder ("magik:")
# 5. Alice invites Bob
# 6. Bob joins
#
# After that, some basic tests are performed; see the "if True:"
# blocks to turn some on or off. Could benefit from some cleanups
# etc. but this seems useful out of the gate for quick testing.
#
# TO RUN:
# from top-level of your checkout (we use "./bin/tahoe"):
# python src/allmydata/test/check_magicfolder_smoke.py
#
# This will create "./smoke_magicfolder" (which is disposable) and
# contains all the Tahoe basedirs for the introducer, storage nodes,
# clients, and the clients' magic-folders. NOTE that if these
# directories already exist they will NOT be re-created. So kill the
# grid and then "rm -rf smoke_magicfolder" if you want to re-run the
# tests cleanly.
#
# Run the script with a single arg, "kill" to run "tahoe stop" on all
# the nodes.
#
# This will have "tahoe start" -ed all the nodes, so you can continue
# to play around after the script exits.
from __future__ import print_function
import sys
import time
import shutil
import subprocess
from os.path import join, abspath, curdir, exists
from os import mkdir, listdir, unlink
is_windows = (sys.platform == 'win32')
tahoe_base = abspath(curdir)
data_base = join(tahoe_base, 'smoke_magicfolder')
if is_windows:
tahoe_bin = 'tahoe.exe'
else:
tahoe_bin = 'tahoe'
python = sys.executable
if not exists(data_base):
print("Creating", data_base)
mkdir(data_base)
if 'kill' in sys.argv:
print("Killing the grid")
for d in listdir(data_base):
print("killing", d)
subprocess.call(
[
tahoe_bin, 'stop', join(data_base, d),
]
)
sys.exit(0)
if not exists(join(data_base, 'introducer')):
subprocess.check_call(
[
tahoe_bin, 'create-introducer', join(data_base, 'introducer'),
]
)
with open(join(data_base, 'introducer', 'tahoe.cfg'), 'w') as f:
f.write('''
[node]
nickname = introducer0
web.port = 4560
''')
if not is_windows:
subprocess.check_call(
[
tahoe_bin, 'start', join(data_base, 'introducer'),
]
)
else:
time.sleep(5)
intro = subprocess.Popen(
[
tahoe_bin, 'start', join(data_base, 'introducer'),
]
)
furl_fname = join(data_base, 'introducer', 'private', 'introducer.furl')
while not exists(furl_fname):
time.sleep(1)
furl = open(furl_fname, 'r').read()
print("FURL", furl)
nodes = []
for x in range(5):
data_dir = join(data_base, 'node%d' % x)
if not exists(data_dir):
subprocess.check_call(
[
tahoe_bin, 'create-node',
'--nickname', 'node%d' % (x,),
'--introducer', furl,
data_dir,
]
)
with open(join(data_dir, 'tahoe.cfg'), 'w') as f:
f.write('''
[node]
nickname = node%(node_id)s
web.port =
web.static = public_html
# tub.location = localhost:%(tub_port)d
[client]
# Which services should this client connect to?
introducer.furl = %(furl)s
shares.needed = 2
shares.happy = 3
shares.total = 4
''' % {'node_id':x, 'furl':furl, 'tub_port':(9900 + x)})
if not is_windows:
subprocess.check_call(
[
tahoe_bin, 'start', data_dir,
]
)
else:
time.sleep(5)
node = subprocess.Popen(
[
tahoe_bin, 'start', data_dir,
]
)
nodes.append(node)
# alice and bob clients
do_invites = False
node_id = 0
clients = []
for name in ['alice', 'bob']:
data_dir = join(data_base, name)
magic_dir = join(data_base, '%s-magic' % (name,))
try:
mkdir(magic_dir)
except Exception:
pass
if not exists(data_dir):
do_invites = True
subprocess.check_call(
[
tahoe_bin, 'create-node',
'--no-storage',
'--nickname', name,
'--introducer', furl,
data_dir,
]
)
with open(join(data_dir, 'tahoe.cfg'), 'w') as f:
f.write('''
[node]
nickname = %(name)s
web.port = tcp:998%(node_id)d:interface=localhost
web.static = public_html
[client]
# Which services should this client connect to?
introducer.furl = %(furl)s
shares.needed = 2
shares.happy = 3
shares.total = 4
''' % {'name':name, 'node_id':node_id, 'furl':furl})
if not is_windows:
subprocess.check_call(
[
tahoe_bin, 'start', data_dir,
]
)
else:
time.sleep(5)
x = subprocess.Popen(
[
tahoe_bin, 'start', data_dir,
]
)
clients.append(x)
node_id += 1
# okay, now we have the alice and bob clients
# now we have alice create a magic-folder, and invite bob to it
time.sleep(5)
if do_invites:
data_dir = join(data_base, 'alice')
# alice creates her folder, invites bob
print("Alice creates a magic-folder")
subprocess.check_call(
[
tahoe_bin, 'magic-folder', 'create', '--basedir', data_dir, 'magik:', 'alice',
join(data_base, 'alice-magic'),
]
)
print("Alice invites Bob")
invite = subprocess.check_output(
[
tahoe_bin, 'magic-folder', 'invite', '--basedir', data_dir, 'magik:', 'bob',
]
)
print(" invite:", invite)
# now we let "bob"/bob join
print("Bob joins Alice's magic folder")
data_dir = join(data_base, 'bob')
subprocess.check_call(
[
tahoe_bin, 'magic-folder', 'join', '--basedir', data_dir, invite,
join(data_base, 'bob-magic'),
]
)
print("Bob has joined.")
print("Restarting alice + bob clients")
if not is_windows:
subprocess.check_call(
[
tahoe_bin, 'restart', '--basedir', join(data_base, 'alice'),
]
)
subprocess.check_call(
[
tahoe_bin, 'restart', '--basedir', join(data_base, 'bob'),
]
)
else:
for x in clients:
x.terminate()
clients = []
a = subprocess.Popen(
[
tahoe_bin, 'start', '--basedir', join(data_base, 'alice'),
]
)
b = subprocess.Popen(
[
tahoe_bin, 'start', '--basedir', join(data_base, 'bob'),
]
)
clients.append(a)
clients.append(b)
if True:
for name in ['alice', 'bob']:
try:
with open(join(data_base, name, 'private', 'magic_folder_dircap'), 'r') as f:
print("dircap %s: %s" % (name, f.read().strip()))
except Exception:
print("can't find/open %s" % (name,))
# give storage nodes a chance to connect properly? I'm not entirely
# sure what's up here, but I get "UnrecoverableFileError" on the
# first_file upload from Alice "very often" otherwise
print("waiting 3 seconds")
time.sleep(3)
if True:
# alice writes a file; bob should get it
alice_foo = join(data_base, 'alice-magic', 'first_file')
bob_foo = join(data_base, 'bob-magic', 'first_file')
with open(alice_foo, 'w') as f:
f.write("line one\n")
print("Waiting for:", bob_foo)
while True:
if exists(bob_foo):
print(" found", bob_foo)
with open(bob_foo, 'r') as f:
if f.read() == "line one\n":
break
print(" file contents still mismatched")
time.sleep(1)
if True:
# bob writes a file; alice should get it
alice_bar = join(data_base, 'alice-magic', 'second_file')
bob_bar = join(data_base, 'bob-magic', 'second_file')
with open(bob_bar, 'w') as f:
f.write("line one\n")
print("Waiting for:", alice_bar)
while True:
if exists(alice_bar):
print(" found", alice_bar)
with open(alice_bar, 'r') as f:
if f.read() == "line one\n":
break
print(" file contents still mismatched")
time.sleep(1)
if True:
# alice deletes 'first_file'
alice_foo = join(data_base, 'alice-magic', 'first_file')
bob_foo = join(data_base, 'bob-magic', 'first_file')
unlink(alice_foo)
print("Waiting for '%s' to disappear" % (bob_foo,))
while True:
if not exists(bob_foo):
print(" disappeared", bob_foo)
break
time.sleep(1)
bob_tmp = bob_foo + '.backup'
print("Waiting for '%s' to appear" % (bob_tmp,))
while True:
if exists(bob_tmp):
print(" appeared", bob_tmp)
break
time.sleep(1)
if True:
# bob writes new content to 'second_file'; alice should get it
alice_foo = join(data_base, 'alice-magic', 'second_file')
bob_foo = join(data_base, 'bob-magic', 'second_file')
gold_content = "line one\nsecond line\n"
with open(bob_foo, 'w') as f:
f.write(gold_content)
print("Waiting for:", alice_foo)
while True:
if exists(alice_foo):
print(" found", alice_foo)
with open(alice_foo, 'r') as f:
content = f.read()
if content == gold_content:
break
print(" file contents still mismatched:\n")
print(content)
time.sleep(1)
if True:
# bob creates a sub-directory and adds a file to it
alice_dir = join(data_base, 'alice-magic', 'subdir')
bob_dir = join(data_base, 'bob-magic', 'subdir')
gold_content = 'a file in a subdirectory\n'
mkdir(bob_dir)
with open(join(bob_dir, 'subfile'), 'w') as f:
f.write(gold_content)
print("Waiting for Alice's subdir '%s' to appear" % (alice_dir,))
while True:
if exists(alice_dir):
print(" found subdir")
if exists(join(alice_dir, 'subfile')):
print(" found file")
with open(join(alice_dir, 'subfile'), 'r') as f:
if f.read() == gold_content:
print(" contents match")
break
time.sleep(0.1)
if True:
# bob deletes the whole subdir
alice_dir = join(data_base, 'alice-magic', 'subdir')
bob_dir = join(data_base, 'bob-magic', 'subdir')
shutil.rmtree(bob_dir)
print("Waiting for Alice's subdir '%s' to disappear" % (alice_dir,))
while True:
if not exists(alice_dir):
print(" it's gone")
break
time.sleep(0.1)
# XXX restore the file not working (but, unit-tests work; what's wrong with them?)
# NOTE: only not-works if it's alice restoring the file!
if True:
# restore 'first_file' but with different contents
print("re-writing 'first_file'")
assert not exists(join(data_base, 'bob-magic', 'first_file'))
assert not exists(join(data_base, 'alice-magic', 'first_file'))
alice_foo = join(data_base, 'alice-magic', 'first_file')
bob_foo = join(data_base, 'bob-magic', 'first_file')
if True:
# if we don't swap around, it works fine
alice_foo, bob_foo = bob_foo, alice_foo
gold_content = "see it again for the first time\n"
with open(bob_foo, 'w') as f:
f.write(gold_content)
print("Waiting for:", alice_foo)
while True:
if exists(alice_foo):
print(" found", alice_foo)
with open(alice_foo, 'r') as f:
content = f.read()
if content == gold_content:
break
print(" file contents still mismatched: %d bytes:\n" % (len(content),))
print(content)
else:
print(" %r not there yet" % (alice_foo,))
time.sleep(1)
if True:
# bob leaves
print('bob leaves')
data_dir = join(data_base, 'bob')
subprocess.check_call(
[
tahoe_bin, 'magic-folder', 'leave', '--basedir', data_dir,
]
)
# XXX test .backup (delete a file)
# port david's clock.advance stuff
# fix clock.advance()
# subdirectory
# file deletes
# conflicts

View File

@ -20,6 +20,9 @@ from twisted.internet import defer, reactor
from twisted.python.failure import Failure
from foolscap.api import Referenceable, fireEventually, RemoteException
from base64 import b32encode
from allmydata.util.assertutil import _assert
from allmydata import uri as tahoe_uri
from allmydata.client import Client
from allmydata.storage.server import StorageServer, storage_index_to_dir
@ -172,8 +175,11 @@ class NoNetworkStorageBroker:
return self.client._servers
def get_nickname_for_serverid(self, serverid):
return None
def on_servers_changed(self, cb):
pass
class NoNetworkClient(Client):
def create_tub(self):
pass
def init_introducer_client(self):
@ -230,6 +236,7 @@ class NoNetworkGrid(service.MultiService):
self.proxies_by_id = {} # maps to IServer on which .rref is a wrapped
# StorageServer
self.clients = []
self.client_config_hooks = client_config_hooks
for i in range(num_servers):
ss = self.make_server(i)
@ -237,30 +244,42 @@ class NoNetworkGrid(service.MultiService):
self.rebuild_serverlist()
for i in range(num_clients):
clientid = hashutil.tagged_hash("clientid", str(i))[:20]
clientdir = os.path.join(basedir, "clients",
idlib.shortnodeid_b2a(clientid))
fileutil.make_dirs(clientdir)
f = open(os.path.join(clientdir, "tahoe.cfg"), "w")
c = self.make_client(i)
self.clients.append(c)
def make_client(self, i, write_config=True):
clientid = hashutil.tagged_hash("clientid", str(i))[:20]
clientdir = os.path.join(self.basedir, "clients",
idlib.shortnodeid_b2a(clientid))
fileutil.make_dirs(clientdir)
tahoe_cfg_path = os.path.join(clientdir, "tahoe.cfg")
if write_config:
f = open(tahoe_cfg_path, "w")
f.write("[node]\n")
f.write("nickname = client-%d\n" % i)
f.write("web.port = tcp:0:interface=127.0.0.1\n")
f.write("[storage]\n")
f.write("enabled = false\n")
f.close()
c = None
if i in client_config_hooks:
# this hook can either modify tahoe.cfg, or return an
# entirely new Client instance
c = client_config_hooks[i](clientdir)
if not c:
c = NoNetworkClient(clientdir)
c.set_default_mutable_keysize(TEST_RSA_KEY_SIZE)
c.nodeid = clientid
c.short_nodeid = b32encode(clientid).lower()[:8]
c._servers = self.all_servers # can be updated later
c.setServiceParent(self)
self.clients.append(c)
else:
_assert(os.path.exists(tahoe_cfg_path), tahoe_cfg_path=tahoe_cfg_path)
c = None
if i in self.client_config_hooks:
# this hook can either modify tahoe.cfg, or return an
# entirely new Client instance
c = self.client_config_hooks[i](clientdir)
if not c:
c = NoNetworkClient(clientdir)
c.set_default_mutable_keysize(TEST_RSA_KEY_SIZE)
c.nodeid = clientid
c.short_nodeid = b32encode(clientid).lower()[:8]
c._servers = self.all_servers # can be updated later
c.setServiceParent(self)
return c
def make_server(self, i, readonly=False):
serverid = hashutil.tagged_hash("serverid", str(i))[:20]
@ -348,6 +367,9 @@ class GridTestMixin:
num_servers=num_servers,
client_config_hooks=client_config_hooks)
self.g.setServiceParent(self.s)
self._record_webports_and_baseurls()
def _record_webports_and_baseurls(self):
self.client_webports = [c.getServiceNamed("webish").getPortnum()
for c in self.g.clients]
self.client_baseurls = [c.getServiceNamed("webish").getURL()
@ -356,6 +378,23 @@ class GridTestMixin:
def get_clientdir(self, i=0):
return self.g.clients[i].basedir
def set_clientdir(self, basedir, i=0):
self.g.clients[i].basedir = basedir
def get_client(self, i=0):
return self.g.clients[i]
def restart_client(self, i=0):
client = self.g.clients[i]
d = defer.succeed(None)
d.addCallback(lambda ign: self.g.removeService(client))
def _make_client(ign):
c = self.g.make_client(i, write_config=False)
self.g.clients[i] = c
self._record_webports_and_baseurls()
d.addCallback(_make_client)
return d
def get_serverdir(self, i):
return self.g.servers_by_number[i].storedir

View File

@ -0,0 +1,371 @@
import os.path
import re
from twisted.trial import unittest
from twisted.internet import defer
from twisted.internet import reactor
from twisted.python import usage
from allmydata.util.assertutil import precondition
from allmydata.util import fileutil
from allmydata.scripts.common import get_aliases
from allmydata.test.no_network import GridTestMixin
from .test_cli import CLITestMixin
from allmydata.scripts import magic_folder_cli
from allmydata.util.fileutil import abspath_expanduser_unicode
from allmydata.util.encodingutil import unicode_to_argv
from allmydata.frontends.magic_folder import MagicFolder
from allmydata import uri
class MagicFolderCLITestMixin(CLITestMixin, GridTestMixin):
def do_create_magic_folder(self, client_num):
d = self.do_cli("magic-folder", "create", "magic:", client_num=client_num)
def _done((rc, stdout, stderr)):
self.failUnlessEqual(rc, 0, stdout + stderr)
self.failUnlessIn("Alias 'magic' created", stdout)
self.failUnlessEqual(stderr, "")
aliases = get_aliases(self.get_clientdir(i=client_num))
self.failUnlessIn("magic", aliases)
self.failUnless(aliases["magic"].startswith("URI:DIR2:"))
d.addCallback(_done)
return d
def do_invite(self, client_num, nickname):
nickname_arg = unicode_to_argv(nickname)
d = self.do_cli("magic-folder", "invite", "magic:", nickname_arg, client_num=client_num)
def _done((rc, stdout, stderr)):
self.failUnlessEqual(rc, 0)
return (rc, stdout, stderr)
d.addCallback(_done)
return d
def do_join(self, client_num, local_dir, invite_code):
precondition(isinstance(local_dir, unicode), local_dir=local_dir)
precondition(isinstance(invite_code, str), invite_code=invite_code)
local_dir_arg = unicode_to_argv(local_dir)
d = self.do_cli("magic-folder", "join", invite_code, local_dir_arg, client_num=client_num)
def _done((rc, stdout, stderr)):
self.failUnlessEqual(rc, 0)
self.failUnlessEqual(stdout, "")
self.failUnlessEqual(stderr, "")
return (rc, stdout, stderr)
d.addCallback(_done)
return d
def do_leave(self, client_num):
d = self.do_cli("magic-folder", "leave", client_num=client_num)
def _done((rc, stdout, stderr)):
self.failUnlessEqual(rc, 0)
return (rc, stdout, stderr)
d.addCallback(_done)
return d
def check_joined_config(self, client_num, upload_dircap):
"""Tests that our collective directory has the readonly cap of
our upload directory.
"""
collective_readonly_cap = fileutil.read(os.path.join(self.get_clientdir(i=client_num),
u"private", u"collective_dircap"))
d = self.do_cli("ls", "--json", collective_readonly_cap, client_num=client_num)
def _done((rc, stdout, stderr)):
self.failUnlessEqual(rc, 0)
return (rc, stdout, stderr)
d.addCallback(_done)
def test_joined_magic_folder((rc, stdout, stderr)):
readonly_cap = unicode(uri.from_string(upload_dircap).get_readonly().to_string(), 'utf-8')
s = re.search(readonly_cap, stdout)
self.failUnless(s is not None)
return None
d.addCallback(test_joined_magic_folder)
return d
def get_caps_from_files(self, client_num):
collective_dircap = fileutil.read(os.path.join(self.get_clientdir(i=client_num),
u"private", u"collective_dircap"))
upload_dircap = fileutil.read(os.path.join(self.get_clientdir(i=client_num),
u"private", u"magic_folder_dircap"))
self.failIf(collective_dircap is None or upload_dircap is None)
return collective_dircap, upload_dircap
def check_config(self, client_num, local_dir):
client_config = fileutil.read(os.path.join(self.get_clientdir(i=client_num), "tahoe.cfg"))
local_dir_utf8 = local_dir.encode('utf-8')
magic_folder_config = "[magic_folder]\nenabled = True\nlocal.directory = %s" % (local_dir_utf8,)
self.failUnlessIn(magic_folder_config, client_config)
def create_invite_join_magic_folder(self, nickname, local_dir):
nickname_arg = unicode_to_argv(nickname)
local_dir_arg = unicode_to_argv(local_dir)
d = self.do_cli("magic-folder", "create", "magic:", nickname_arg, local_dir_arg)
def _done((rc, stdout, stderr)):
self.failUnlessEqual(rc, 0, stdout + stderr)
client = self.get_client()
self.collective_dircap, self.upload_dircap = self.get_caps_from_files(0)
self.collective_dirnode = client.create_node_from_uri(self.collective_dircap)
self.upload_dirnode = client.create_node_from_uri(self.upload_dircap)
d.addCallback(_done)
d.addCallback(lambda ign: self.check_joined_config(0, self.upload_dircap))
d.addCallback(lambda ign: self.check_config(0, local_dir))
return d
# XXX should probably just be "tearDown"...
def cleanup(self, res):
d = defer.succeed(None)
def _clean(ign):
d = self.magicfolder.finish()
self.magicfolder.uploader._clock.advance(self.magicfolder.uploader.scan_interval + 1)
self.magicfolder.downloader._clock.advance(self.magicfolder.downloader.scan_interval + 1)
return d
d.addCallback(_clean)
d.addCallback(lambda ign: res)
return d
def init_magicfolder(self, client_num, upload_dircap, collective_dircap, local_magic_dir, clock):
dbfile = abspath_expanduser_unicode(u"magicfolderdb.sqlite", base=self.get_clientdir(i=client_num))
magicfolder = MagicFolder(self.get_client(client_num), upload_dircap, collective_dircap, local_magic_dir,
dbfile, 0077, pending_delay=0.2, clock=clock)
magicfolder.downloader._turn_delay = 0
magicfolder.setServiceParent(self.get_client(client_num))
magicfolder.ready()
return magicfolder
def setup_alice_and_bob(self, alice_clock=reactor, bob_clock=reactor):
self.set_up_grid(num_clients=2)
self.alice_magicfolder = None
self.bob_magicfolder = None
alice_magic_dir = abspath_expanduser_unicode(u"Alice-magic", base=self.basedir)
self.mkdir_nonascii(alice_magic_dir)
bob_magic_dir = abspath_expanduser_unicode(u"Bob-magic", base=self.basedir)
self.mkdir_nonascii(bob_magic_dir)
# Alice creates a Magic Folder,
# invites herself, and then joins.
d = self.do_create_magic_folder(0)
d.addCallback(lambda ign: self.do_invite(0, u"Alice\u00F8"))
def get_invite_code(result):
self.invite_code = result[1].strip()
d.addCallback(get_invite_code)
d.addCallback(lambda ign: self.do_join(0, alice_magic_dir, self.invite_code))
def get_alice_caps(ign):
self.alice_collective_dircap, self.alice_upload_dircap = self.get_caps_from_files(0)
d.addCallback(get_alice_caps)
d.addCallback(lambda ign: self.check_joined_config(0, self.alice_upload_dircap))
d.addCallback(lambda ign: self.check_config(0, alice_magic_dir))
def get_Alice_magicfolder(result):
self.alice_magicfolder = self.init_magicfolder(0, self.alice_upload_dircap,
self.alice_collective_dircap,
alice_magic_dir, alice_clock)
return result
d.addCallback(get_Alice_magicfolder)
# Alice invites Bob. Bob joins.
d.addCallback(lambda ign: self.do_invite(0, u"Bob\u00F8"))
def get_invite_code(result):
self.invite_code = result[1].strip()
d.addCallback(get_invite_code)
d.addCallback(lambda ign: self.do_join(1, bob_magic_dir, self.invite_code))
def get_bob_caps(ign):
self.bob_collective_dircap, self.bob_upload_dircap = self.get_caps_from_files(1)
d.addCallback(get_bob_caps)
d.addCallback(lambda ign: self.check_joined_config(1, self.bob_upload_dircap))
d.addCallback(lambda ign: self.check_config(1, bob_magic_dir))
def get_Bob_magicfolder(result):
self.bob_magicfolder = self.init_magicfolder(1, self.bob_upload_dircap,
self.bob_collective_dircap,
bob_magic_dir, bob_clock)
return result
d.addCallback(get_Bob_magicfolder)
return d
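A test using this mixin would typically drive the fixture with deterministic clocks (twisted.internet.task.Clock) rather than the real reactor; a minimal sketch, with illustrative class and test names:

from twisted.internet import task
from twisted.trial import unittest

class AliceBobExample(MagicFolderCLITestMixin, unittest.TestCase):
    def test_fixture(self):
        self.basedir = "cli/MagicFolder/alice-bob-example"
        alice_clock = task.Clock()
        bob_clock = task.Clock()
        d = self.setup_alice_and_bob(alice_clock, bob_clock)
        def _check(ign):
            self.failIfEqual(self.alice_magicfolder, None)
            self.failIfEqual(self.bob_magicfolder, None)
        d.addCallback(_check)
        return d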
class CreateMagicFolder(MagicFolderCLITestMixin, unittest.TestCase):
def test_create_and_then_invite_join(self):
self.basedir = "cli/MagicFolder/create-and-then-invite-join"
self.set_up_grid()
local_dir = os.path.join(self.basedir, "magic")
abs_local_dir_u = abspath_expanduser_unicode(unicode(local_dir), long_path=False)
d = self.do_create_magic_folder(0)
d.addCallback(lambda ign: self.do_invite(0, u"Alice"))
def get_invite_code_and_join((rc, stdout, stderr)):
invite_code = stdout.strip()
return self.do_join(0, unicode(local_dir), invite_code)
d.addCallback(get_invite_code_and_join)
def get_caps(ign):
self.collective_dircap, self.upload_dircap = self.get_caps_from_files(0)
d.addCallback(get_caps)
d.addCallback(lambda ign: self.check_joined_config(0, self.upload_dircap))
d.addCallback(lambda ign: self.check_config(0, abs_local_dir_u))
return d
def test_create_error(self):
self.basedir = "cli/MagicFolder/create-error"
self.set_up_grid()
d = self.do_cli("magic-folder", "create", "m a g i c:", client_num=0)
def _done((rc, stdout, stderr)):
self.failIfEqual(rc, 0)
self.failUnlessIn("Alias names cannot contain spaces.", stderr)
d.addCallback(_done)
return d
def test_create_invite_join(self):
self.basedir = "cli/MagicFolder/create-invite-join"
self.set_up_grid()
local_dir = os.path.join(self.basedir, "magic")
abs_local_dir_u = abspath_expanduser_unicode(unicode(local_dir), long_path=False)
d = self.do_cli("magic-folder", "create", "magic:", "Alice", local_dir)
def _done((rc, stdout, stderr)):
self.failUnlessEqual(rc, 0)
self.collective_dircap, self.upload_dircap = self.get_caps_from_files(0)
d.addCallback(_done)
d.addCallback(lambda ign: self.check_joined_config(0, self.upload_dircap))
d.addCallback(lambda ign: self.check_config(0, abs_local_dir_u))
return d
def test_create_invite_join_failure(self):
self.basedir = "cli/MagicFolder/create-invite-join-failure"
os.makedirs(self.basedir)
o = magic_folder_cli.CreateOptions()
o.parent = magic_folder_cli.MagicFolderCommand()
o.parent['node-directory'] = self.basedir
try:
o.parseArgs("magic:", "Alice", "-foo")
except usage.UsageError as e:
self.failUnlessIn("cannot start with '-'", str(e))
else:
self.fail("expected UsageError")
def test_join_failure(self):
self.basedir = "cli/MagicFolder/create-join-failure"
os.makedirs(self.basedir)
o = magic_folder_cli.JoinOptions()
o.parent = magic_folder_cli.MagicFolderCommand()
o.parent['node-directory'] = self.basedir
try:
o.parseArgs("URI:invite+URI:code", "-foo")
except usage.UsageError as e:
self.failUnlessIn("cannot start with '-'", str(e))
else:
self.fail("expected UsageError")
def test_join_twice_failure(self):
self.basedir = "cli/MagicFolder/create-join-twice-failure"
os.makedirs(self.basedir)
self.set_up_grid()
local_dir = os.path.join(self.basedir, "magic")
abs_local_dir_u = abspath_expanduser_unicode(unicode(local_dir), long_path=False)
d = self.do_create_magic_folder(0)
d.addCallback(lambda ign: self.do_invite(0, u"Alice"))
def get_invite_code_and_join((rc, stdout, stderr)):
self.invite_code = stdout.strip()
return self.do_join(0, unicode(local_dir), self.invite_code)
d.addCallback(get_invite_code_and_join)
def get_caps(ign):
self.collective_dircap, self.upload_dircap = self.get_caps_from_files(0)
d.addCallback(get_caps)
d.addCallback(lambda ign: self.check_joined_config(0, self.upload_dircap))
d.addCallback(lambda ign: self.check_config(0, abs_local_dir_u))
def join_again(ignore):
return self.do_cli("magic-folder", "join", self.invite_code, local_dir, client_num=0)
d.addCallback(join_again)
def get_results(result):
(rc, out, err) = result
self.failUnlessEqual(out, "")
self.failUnlessIn("This client has already joined a magic folder.", err)
self.failUnlessIn("Use the 'tahoe magic-folder leave' command first.", err)
self.failIfEqual(rc, 0)
d.addCallback(get_results)
return d
def test_join_leave_join(self):
self.basedir = "cli/MagicFolder/create-join-leave-join"
os.makedirs(self.basedir)
self.set_up_grid()
local_dir = os.path.join(self.basedir, "magic")
abs_local_dir_u = abspath_expanduser_unicode(unicode(local_dir), long_path=False)
self.invite_code = None
d = self.do_create_magic_folder(0)
d.addCallback(lambda ign: self.do_invite(0, u"Alice"))
def get_invite_code_and_join((rc, stdout, stderr)):
self.failUnlessEqual(rc, 0)
self.invite_code = stdout.strip()
return self.do_join(0, unicode(local_dir), self.invite_code)
d.addCallback(get_invite_code_and_join)
def get_caps(ign):
self.collective_dircap, self.upload_dircap = self.get_caps_from_files(0)
d.addCallback(get_caps)
d.addCallback(lambda ign: self.check_joined_config(0, self.upload_dircap))
d.addCallback(lambda ign: self.check_config(0, abs_local_dir_u))
d.addCallback(lambda ign: self.do_leave(0))
d.addCallback(lambda ign: self.do_join(0, unicode(local_dir), self.invite_code))
def get_caps(ign):
self.collective_dircap, self.upload_dircap = self.get_caps_from_files(0)
d.addCallback(get_caps)
d.addCallback(lambda ign: self.check_joined_config(0, self.upload_dircap))
d.addCallback(lambda ign: self.check_config(0, abs_local_dir_u))
return d
def test_join_failures(self):
self.basedir = "cli/MagicFolder/create-join-failures"
os.makedirs(self.basedir)
self.set_up_grid()
local_dir = os.path.join(self.basedir, "magic")
abs_local_dir_u = abspath_expanduser_unicode(unicode(local_dir), long_path=False)
self.invite_code = None
d = self.do_create_magic_folder(0)
d.addCallback(lambda ign: self.do_invite(0, u"Alice"))
def get_invite_code_and_join((rc, stdout, stderr)):
self.failUnlessEqual(rc, 0)
self.invite_code = stdout.strip()
return self.do_join(0, unicode(local_dir), self.invite_code)
d.addCallback(get_invite_code_and_join)
def get_caps(ign):
self.collective_dircap, self.upload_dircap = self.get_caps_from_files(0)
d.addCallback(get_caps)
d.addCallback(lambda ign: self.check_joined_config(0, self.upload_dircap))
d.addCallback(lambda ign: self.check_config(0, abs_local_dir_u))
def check_success(result):
(rc, out, err) = result
self.failUnlessEqual(rc, 0)
def check_failure(result):
(rc, out, err) = result
self.failIfEqual(rc, 0)
def leave(ign):
return self.do_cli("magic-folder", "leave", client_num=0)
d.addCallback(leave)
d.addCallback(check_success)
collective_dircap_file = os.path.join(self.get_clientdir(i=0), u"private", u"collective_dircap")
upload_dircap_file = os.path.join(self.get_clientdir(i=0), u"private", u"magic_folder_dircap")
magic_folder_db_file = os.path.join(self.get_clientdir(i=0), u"private", u"magicfolderdb.sqlite")
def check_join_if_file(my_file):
fileutil.write(my_file, "my file data")
d2 = self.do_cli("magic-folder", "join", self.invite_code, local_dir, client_num=0)
d2.addCallback(check_failure)
return d2
for my_file in [collective_dircap_file, upload_dircap_file, magic_folder_db_file]:
d.addCallback(lambda ign, my_file: check_join_if_file(my_file), my_file)
d.addCallback(leave)
d.addCallback(check_success)
return d
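Note the lambda-with-extra-argument idiom in the loop above: Deferred.addCallback passes any extra positional arguments through to the callback after the result, which sidesteps the late-binding pitfall of closing over the loop variable. A minimal illustration:

from twisted.internet import defer

def report(result, label):
    print label, result

d = defer.succeed("ok")
d.addCallback(report, "first")    # invoked as report("ok", "first")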

View File

@ -4,7 +4,7 @@ from twisted.trial import unittest
from twisted.application import service
import allmydata
import allmydata.frontends.drop_upload
import allmydata.frontends.magic_folder
import allmydata.util.log
from allmydata.node import Node, OldConfigError, OldConfigOptionError, MissingConfigEntry, UnescapedHashError
@ -26,7 +26,7 @@ BASECONFIG_I = ("[client]\n"
"introducer.furl = %s\n"
)
class Basic(testutil.ReallyEqualMixin, unittest.TestCase):
class Basic(testutil.ReallyEqualMixin, testutil.NonASCIIPathMixin, unittest.TestCase):
def test_loadable(self):
basedir = "test_client.Basic.test_loadable"
os.mkdir(basedir)
@ -248,7 +248,7 @@ class Basic(testutil.ReallyEqualMixin, unittest.TestCase):
self.failUnlessReallyEqual(self._permute(sb, "one"), [])
def test_permute_with_preferred(self):
sb = StorageFarmBroker(True, ['1','4'])
sb = StorageFarmBroker(True, preferred_peers=['1','4'])
for k in ["%d" % i for i in range(5)]:
ann = {"anonymous-storage-FURL": "pb://abcde@nowhere/fake",
"permutation-seed-base32": base32.b2a(k) }
@ -299,76 +299,80 @@ class Basic(testutil.ReallyEqualMixin, unittest.TestCase):
_check("helper.furl = None", None)
_check("helper.furl = pb://blah\n", "pb://blah")
def test_create_drop_uploader(self):
class MockDropUploader(service.MultiService):
name = 'drop-upload'
def test_create_magic_folder_service(self):
class MockMagicFolder(service.MultiService):
name = 'magic-folder'
def __init__(self, client, upload_dircap, local_dir_utf8, inotify=None):
def __init__(self, client, upload_dircap, collective_dircap, local_dir, dbfile, umask, inotify=None,
pending_delay=1.0):
service.MultiService.__init__(self)
self.client = client
self._umask = umask
self.upload_dircap = upload_dircap
self.local_dir_utf8 = local_dir_utf8
self.collective_dircap = collective_dircap
self.local_dir = local_dir
self.dbfile = dbfile
self.inotify = inotify
self.patch(allmydata.frontends.drop_upload, 'DropUploader', MockDropUploader)
def ready(self):
pass
self.patch(allmydata.frontends.magic_folder, 'MagicFolder', MockMagicFolder)
upload_dircap = "URI:DIR2:blah"
local_dir_utf8 = u"loc\u0101l_dir".encode('utf-8')
local_dir_u = self.unicode_or_fallback(u"loc\u0101l_dir", u"local_dir")
local_dir_utf8 = local_dir_u.encode('utf-8')
config = (BASECONFIG +
"[storage]\n" +
"enabled = false\n" +
"[drop_upload]\n" +
"[magic_folder]\n" +
"enabled = true\n")
basedir1 = "test_client.Basic.test_create_drop_uploader1"
basedir1 = "test_client.Basic.test_create_magic_folder_service1"
os.mkdir(basedir1)
fileutil.write(os.path.join(basedir1, "tahoe.cfg"),
config + "local.directory = " + local_dir_utf8 + "\n")
self.failUnlessRaises(MissingConfigEntry, client.Client, basedir1)
fileutil.write(os.path.join(basedir1, "tahoe.cfg"), config)
fileutil.write(os.path.join(basedir1, "private", "drop_upload_dircap"), "URI:DIR2:blah")
fileutil.write(os.path.join(basedir1, "private", "magic_folder_dircap"), "URI:DIR2:blah")
fileutil.write(os.path.join(basedir1, "private", "collective_dircap"), "URI:DIR2:meow")
self.failUnlessRaises(MissingConfigEntry, client.Client, basedir1)
fileutil.write(os.path.join(basedir1, "tahoe.cfg"),
config + "upload.dircap = " + upload_dircap + "\n")
config.replace("[magic_folder]\n", "[drop_upload]\n"))
self.failUnlessRaises(OldConfigOptionError, client.Client, basedir1)
fileutil.write(os.path.join(basedir1, "tahoe.cfg"),
config + "local.directory = " + local_dir_utf8 + "\n")
c1 = client.Client(basedir1)
uploader = c1.getServiceNamed('drop-upload')
self.failUnless(isinstance(uploader, MockDropUploader), uploader)
self.failUnlessReallyEqual(uploader.client, c1)
self.failUnlessReallyEqual(uploader.upload_dircap, upload_dircap)
self.failUnlessReallyEqual(uploader.local_dir_utf8, local_dir_utf8)
self.failUnless(uploader.inotify is None, uploader.inotify)
self.failUnless(uploader.running)
magicfolder = c1.getServiceNamed('magic-folder')
self.failUnless(isinstance(magicfolder, MockMagicFolder), magicfolder)
self.failUnlessReallyEqual(magicfolder.client, c1)
self.failUnlessReallyEqual(magicfolder.upload_dircap, upload_dircap)
self.failUnlessReallyEqual(os.path.basename(magicfolder.local_dir), local_dir_u)
self.failUnless(magicfolder.inotify is None, magicfolder.inotify)
self.failUnless(magicfolder.running)
class Boom(Exception):
pass
def BoomDropUploader(client, upload_dircap, local_dir_utf8, inotify=None):
def BoomMagicFolder(client, upload_dircap, collective_dircap, local_dir, dbfile,
inotify=None, pending_delay=1.0):
raise Boom()
self.patch(allmydata.frontends.magic_folder, 'MagicFolder', BoomMagicFolder)
logged_messages = []
def mock_log(*args, **kwargs):
logged_messages.append("%r %r" % (args, kwargs))
self.patch(allmydata.util.log, 'msg', mock_log)
self.patch(allmydata.frontends.drop_upload, 'DropUploader', BoomDropUploader)
basedir2 = "test_client.Basic.test_create_drop_uploader2"
basedir2 = "test_client.Basic.test_create_magic_folder_service2"
os.mkdir(basedir2)
os.mkdir(os.path.join(basedir2, "private"))
fileutil.write(os.path.join(basedir2, "tahoe.cfg"),
BASECONFIG +
"[drop_upload]\n" +
"[magic_folder]\n" +
"enabled = true\n" +
"local.directory = " + local_dir_utf8 + "\n")
fileutil.write(os.path.join(basedir2, "private", "drop_upload_dircap"), "URI:DIR2:blah")
c2 = client.Client(basedir2)
self.failUnlessRaises(KeyError, c2.getServiceNamed, 'drop-upload')
self.failUnless([True for arg in logged_messages if "Boom" in arg],
logged_messages)
fileutil.write(os.path.join(basedir2, "private", "magic_folder_dircap"), "URI:DIR2:blah")
fileutil.write(os.path.join(basedir2, "private", "collective_dircap"), "URI:DIR2:meow")
self.failUnlessRaises(Boom, client.Client, basedir2)
def flush_but_dont_ignore(res):

View File

@ -1,181 +0,0 @@
import os, sys
from twisted.trial import unittest
from twisted.python import filepath, runtime
from twisted.internet import defer
from allmydata.interfaces import IDirectoryNode, NoSuchChildError
from allmydata.util import fake_inotify
from allmydata.util.encodingutil import get_filesystem_encoding
from allmydata.util.consumer import download_to_data
from allmydata.test.no_network import GridTestMixin
from allmydata.test.common_util import ReallyEqualMixin, NonASCIIPathMixin
from allmydata.test.common import ShouldFailMixin
from allmydata.frontends.drop_upload import DropUploader
class DropUploadTestMixin(GridTestMixin, ShouldFailMixin, ReallyEqualMixin, NonASCIIPathMixin):
"""
These tests will be run both with a mock notifier, and (on platforms that support it)
with the real INotify.
"""
def _get_count(self, name):
return self.stats_provider.get_stats()["counters"].get(name, 0)
def _test(self):
self.uploader = None
self.set_up_grid()
self.local_dir = os.path.join(self.basedir, self.unicode_or_fallback(u"loc\u0101l_dir", u"local_dir"))
self.mkdir_nonascii(self.local_dir)
self.client = self.g.clients[0]
self.stats_provider = self.client.stats_provider
d = self.client.create_dirnode()
def _made_upload_dir(n):
self.failUnless(IDirectoryNode.providedBy(n))
self.upload_dirnode = n
self.upload_dircap = n.get_uri()
self.uploader = DropUploader(self.client, self.upload_dircap, self.local_dir.encode('utf-8'),
inotify=self.inotify)
return self.uploader.startService()
d.addCallback(_made_upload_dir)
# Write something short enough for a LIT file.
d.addCallback(lambda ign: self._test_file(u"short", "test"))
# Write to the same file again with different data.
d.addCallback(lambda ign: self._test_file(u"short", "different"))
# Test that temporary files are not uploaded.
d.addCallback(lambda ign: self._test_file(u"tempfile", "test", temporary=True))
# Test that we tolerate creation of a subdirectory.
d.addCallback(lambda ign: os.mkdir(os.path.join(self.local_dir, u"directory")))
# Write something longer, and also try to test a Unicode name if the fs can represent it.
name_u = self.unicode_or_fallback(u"l\u00F8ng", u"long")
d.addCallback(lambda ign: self._test_file(name_u, "test"*100))
# TODO: test that causes an upload failure.
d.addCallback(lambda ign: self.failUnlessReallyEqual(self._get_count('drop_upload.files_failed'), 0))
# Prevent unclean reactor errors.
def _cleanup(res):
d = defer.succeed(None)
if self.uploader is not None:
d.addCallback(lambda ign: self.uploader.finish(for_tests=True))
d.addCallback(lambda ign: res)
return d
d.addBoth(_cleanup)
return d
def _test_file(self, name_u, data, temporary=False):
previously_uploaded = self._get_count('drop_upload.files_uploaded')
previously_disappeared = self._get_count('drop_upload.files_disappeared')
d = defer.Deferred()
# Note: this relies on the fact that we only get one IN_CLOSE_WRITE notification per file
# (otherwise we would get a defer.AlreadyCalledError). Should we be relying on that?
self.uploader.set_uploaded_callback(d.callback)
path_u = os.path.join(self.local_dir, name_u)
if sys.platform == "win32":
path = filepath.FilePath(path_u)
else:
path = filepath.FilePath(path_u.encode(get_filesystem_encoding()))
# We don't use FilePath.setContent() here because it creates a temporary file that
# is renamed into place, which causes events that the test is not expecting.
f = open(path.path, "wb")
try:
if temporary and sys.platform != "win32":
os.unlink(path.path)
f.write(data)
finally:
f.close()
if temporary and sys.platform == "win32":
os.unlink(path.path)
self.notify_close_write(path)
if temporary:
d.addCallback(lambda ign: self.shouldFail(NoSuchChildError, 'temp file not uploaded', None,
self.upload_dirnode.get, name_u))
d.addCallback(lambda ign: self.failUnlessReallyEqual(self._get_count('drop_upload.files_disappeared'),
previously_disappeared + 1))
else:
d.addCallback(lambda ign: self.upload_dirnode.get(name_u))
d.addCallback(download_to_data)
d.addCallback(lambda actual_data: self.failUnlessReallyEqual(actual_data, data))
d.addCallback(lambda ign: self.failUnlessReallyEqual(self._get_count('drop_upload.files_uploaded'),
previously_uploaded + 1))
d.addCallback(lambda ign: self.failUnlessReallyEqual(self._get_count('drop_upload.files_queued'), 0))
return d
class MockTest(DropUploadTestMixin, unittest.TestCase):
"""This can run on any platform, and even if twisted.internet.inotify can't be imported."""
def test_errors(self):
self.basedir = "drop_upload.MockTest.test_errors"
self.set_up_grid()
errors_dir = os.path.join(self.basedir, "errors_dir")
os.mkdir(errors_dir)
client = self.g.clients[0]
d = client.create_dirnode()
def _made_upload_dir(n):
self.failUnless(IDirectoryNode.providedBy(n))
upload_dircap = n.get_uri()
readonly_dircap = n.get_readonly_uri()
self.shouldFail(AssertionError, 'invalid local.directory', 'could not be represented',
DropUploader, client, upload_dircap, '\xFF', inotify=fake_inotify)
self.shouldFail(AssertionError, 'nonexistent local.directory', 'there is no directory',
DropUploader, client, upload_dircap, os.path.join(self.basedir, "Laputa"), inotify=fake_inotify)
fp = filepath.FilePath(self.basedir).child('NOT_A_DIR')
fp.touch()
self.shouldFail(AssertionError, 'non-directory local.directory', 'is not a directory',
DropUploader, client, upload_dircap, fp.path, inotify=fake_inotify)
self.shouldFail(AssertionError, 'bad upload.dircap', 'does not refer to a directory',
DropUploader, client, 'bad', errors_dir, inotify=fake_inotify)
self.shouldFail(AssertionError, 'non-directory upload.dircap', 'does not refer to a directory',
DropUploader, client, 'URI:LIT:foo', errors_dir, inotify=fake_inotify)
self.shouldFail(AssertionError, 'readonly upload.dircap', 'is not a writecap to a directory',
DropUploader, client, readonly_dircap, errors_dir, inotify=fake_inotify)
d.addCallback(_made_upload_dir)
return d
def test_drop_upload(self):
self.inotify = fake_inotify
self.basedir = "drop_upload.MockTest.test_drop_upload"
return self._test()
def notify_close_write(self, path):
self.uploader._notifier.event(path, self.inotify.IN_CLOSE_WRITE)
class RealTest(DropUploadTestMixin, unittest.TestCase):
"""This is skipped unless both Twisted and the platform support inotify."""
def test_drop_upload(self):
# We should always have runtime.platform.supportsINotify, because we're using
# Twisted >= 10.1.
if not runtime.platform.supportsINotify():
raise unittest.SkipTest("Drop-upload support can only be tested for-real on an OS that supports inotify or equivalent.")
self.inotify = None # use the appropriate inotify for the platform
self.basedir = "drop_upload.RealTest.test_drop_upload"
return self._test()
def notify_close_write(self, path):
# Writing to the file causes the notification.
pass

View File

@ -61,12 +61,15 @@ import os, sys, locale
from twisted.trial import unittest
from twisted.python.filepath import FilePath
from allmydata.test.common_util import ReallyEqualMixin
from allmydata.util import encodingutil, fileutil
from allmydata.util.encodingutil import argv_to_unicode, unicode_to_url, \
unicode_to_output, quote_output, quote_path, quote_local_unicode_path, \
unicode_platform, listdir_unicode, FilenameEncodingError, get_io_encoding, \
get_filesystem_encoding, to_str, from_utf8_or_none, _reload
quote_filepath, unicode_platform, listdir_unicode, FilenameEncodingError, \
get_io_encoding, get_filesystem_encoding, to_str, from_utf8_or_none, _reload, \
to_filepath, extend_filepath, unicode_from_filepath, unicode_segments_from
from allmydata.dirnode import normalize
from twisted.python import usage
@ -410,6 +413,9 @@ class QuoteOutput(ReallyEqualMixin, unittest.TestCase):
self.test_quote_output_utf8(None)
def win32_other(win32, other):
return win32 if sys.platform == "win32" else other
class QuotePaths(ReallyEqualMixin, unittest.TestCase):
def test_quote_path(self):
self.failUnlessReallyEqual(quote_path([u'foo', u'bar']), "'foo/bar'")
@ -419,9 +425,6 @@ class QuotePaths(ReallyEqualMixin, unittest.TestCase):
self.failUnlessReallyEqual(quote_path([u'foo', u'\nbar'], quotemarks=True), '"foo/\\x0abar"')
self.failUnlessReallyEqual(quote_path([u'foo', u'\nbar'], quotemarks=False), '"foo/\\x0abar"')
def win32_other(win32, other):
return win32 if sys.platform == "win32" else other
self.failUnlessReallyEqual(quote_local_unicode_path(u"\\\\?\\C:\\foo"),
win32_other("'C:\\foo'", "'\\\\?\\C:\\foo'"))
self.failUnlessReallyEqual(quote_local_unicode_path(u"\\\\?\\C:\\foo", quotemarks=True),
@ -435,6 +438,73 @@ class QuotePaths(ReallyEqualMixin, unittest.TestCase):
self.failUnlessReallyEqual(quote_local_unicode_path(u"\\\\?\\UNC\\foo\\bar", quotemarks=False),
win32_other("\\\\foo\\bar", "\\\\?\\UNC\\foo\\bar"))
def test_quote_filepath(self):
foo_bar_fp = FilePath(win32_other(u'C:\\foo\\bar', u'/foo/bar'))
self.failUnlessReallyEqual(quote_filepath(foo_bar_fp),
win32_other("'C:\\foo\\bar'", "'/foo/bar'"))
self.failUnlessReallyEqual(quote_filepath(foo_bar_fp, quotemarks=True),
win32_other("'C:\\foo\\bar'", "'/foo/bar'"))
self.failUnlessReallyEqual(quote_filepath(foo_bar_fp, quotemarks=False),
win32_other("C:\\foo\\bar", "/foo/bar"))
if sys.platform == "win32":
foo_longfp = FilePath(u'\\\\?\\C:\\foo')
self.failUnlessReallyEqual(quote_filepath(foo_longfp),
"'C:\\foo'")
self.failUnlessReallyEqual(quote_filepath(foo_longfp, quotemarks=True),
"'C:\\foo'")
self.failUnlessReallyEqual(quote_filepath(foo_longfp, quotemarks=False),
"C:\\foo")
class FilePaths(ReallyEqualMixin, unittest.TestCase):
def test_to_filepath(self):
foo_u = win32_other(u'C:\\foo', u'/foo')
nosep_fp = to_filepath(foo_u)
sep_fp = to_filepath(foo_u + os.path.sep)
for fp in (nosep_fp, sep_fp):
self.failUnlessReallyEqual(fp, FilePath(foo_u))
if encodingutil.use_unicode_filepath:
self.failUnlessReallyEqual(fp.path, foo_u)
if sys.platform == "win32":
long_u = u'\\\\?\\C:\\foo'
longfp = to_filepath(long_u + u'\\')
self.failUnlessReallyEqual(longfp, FilePath(long_u))
self.failUnlessReallyEqual(longfp.path, long_u)
def test_extend_filepath(self):
foo_bfp = FilePath(win32_other(b'C:\\foo', b'/foo'))
foo_ufp = FilePath(win32_other(u'C:\\foo', u'/foo'))
foo_bar_baz_u = win32_other(u'C:\\foo\\bar\\baz', u'/foo/bar/baz')
for foo_fp in (foo_bfp, foo_ufp):
fp = extend_filepath(foo_fp, [u'bar', u'baz'])
self.failUnlessReallyEqual(fp, FilePath(foo_bar_baz_u))
if encodingutil.use_unicode_filepath:
self.failUnlessReallyEqual(fp.path, foo_bar_baz_u)
def test_unicode_from_filepath(self):
foo_bfp = FilePath(win32_other(b'C:\\foo', b'/foo'))
foo_ufp = FilePath(win32_other(u'C:\\foo', u'/foo'))
foo_u = win32_other(u'C:\\foo', u'/foo')
for foo_fp in (foo_bfp, foo_ufp):
self.failUnlessReallyEqual(unicode_from_filepath(foo_fp), foo_u)
def test_unicode_segments_from(self):
foo_bfp = FilePath(win32_other(b'C:\\foo', b'/foo'))
foo_ufp = FilePath(win32_other(u'C:\\foo', u'/foo'))
foo_bar_baz_bfp = FilePath(win32_other(b'C:\\foo\\bar\\baz', b'/foo/bar/baz'))
foo_bar_baz_ufp = FilePath(win32_other(u'C:\\foo\\bar\\baz', u'/foo/bar/baz'))
for foo_fp in (foo_bfp, foo_ufp):
for foo_bar_baz_fp in (foo_bar_baz_bfp, foo_bar_baz_ufp):
self.failUnlessReallyEqual(unicode_segments_from(foo_bar_baz_fp, foo_fp),
[u'bar', u'baz'])
class UbuntuKarmicUTF8(EncodingUtil, unittest.TestCase):
uname = 'Linux korn 2.6.31-14-generic #48-Ubuntu SMP Fri Oct 16 14:05:01 UTC 2009 x86_64'

File diff suppressed because it is too large

View File

@ -0,0 +1,28 @@
from twisted.trial import unittest
from allmydata import magicpath
class MagicPath(unittest.TestCase):
tests = {
u"Documents/work/critical-project/qed.txt": u"Documents@_work@_critical-project@_qed.txt",
u"Documents/emails/bunnyfufu@hoppingforest.net": u"Documents@_emails@_bunnyfufu@@hoppingforest.net",
u"foo/@/bar": u"foo@_@@@_bar",
}
def test_path2magic(self):
for test, expected in self.tests.items():
self.failUnlessEqual(magicpath.path2magic(test), expected)
def test_magic2path(self):
for expected, test in self.tests.items():
self.failUnlessEqual(magicpath.magic2path(test), expected)
def test_should_ignore(self):
self.failUnlessEqual(magicpath.should_ignore_file(u".bashrc"), True)
self.failUnlessEqual(magicpath.should_ignore_file(u"bashrc."), False)
self.failUnlessEqual(magicpath.should_ignore_file(u"forest/tree/branch/.bashrc"), True)
self.failUnlessEqual(magicpath.should_ignore_file(u"forest/tree/.branch/bashrc"), True)
self.failUnlessEqual(magicpath.should_ignore_file(u"forest/.tree/branch/bashrc"), True)
self.failUnlessEqual(magicpath.should_ignore_file(u"forest/tree/branch/bashrc"), False)
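The encoding these tests pin down is simple: '/' becomes '@_', a literal '@' becomes '@@', and any path with a dotfile segment is ignored. A sketch consistent with the vectors above (not necessarily the shipped implementation):

import re

def path2magic(path_u):
    # '/' -> '@_', '@' -> '@@'
    return re.sub(u'[/@]', lambda m: {u'/': u'@_', u'@': u'@@'}[m.group(0)], path_u)

def magic2path(name_u):
    return re.sub(u'@[_@]', lambda m: {u'@_': u'/', u'@@': u'@'}[m.group(0)], name_u)

def should_ignore_file(path_u):
    return any(seg.startswith(u'.') for seg in path_u.split(u'/'))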

View File

@ -8,7 +8,8 @@ from twisted.internet import threads
from twisted.internet.defer import inlineCallbacks, returnValue
from allmydata.util import fileutil, pollmixin
from allmydata.util.encodingutil import unicode_to_argv, unicode_to_output, get_filesystem_encoding
from allmydata.util.encodingutil import unicode_to_argv, unicode_to_output, \
get_filesystem_encoding
from allmydata.scripts import runner
from allmydata.client import Client
from allmydata.test import common_util
@ -211,8 +212,6 @@ class CreateNode(unittest.TestCase):
self.failUnless(re.search(r"\n\[storage\]\n#.*\nenabled = true\n", content), content)
self.failUnless("\nreserved_space = 1G\n" in content)
self.failUnless(re.search(r"\n\[drop_upload\]\n#.*\nenabled = false\n", content), content)
# creating the node a second time should be rejected
rc, out, err = self.run_tahoe(argv)
self.failIfEqual(rc, 0, str((out, err, rc)))

View File

@ -478,6 +478,74 @@ class FileUtil(ReallyEqualMixin, unittest.TestCase):
self.failIf(os.path.exists(fn))
self.failUnless(os.path.exists(fn2))
def test_rename_no_overwrite(self):
workdir = fileutil.abspath_expanduser_unicode(u"test_rename_no_overwrite")
fileutil.make_dirs(workdir)
source_path = os.path.join(workdir, "source")
dest_path = os.path.join(workdir, "dest")
# when neither file exists
self.failUnlessRaises(OSError, fileutil.rename_no_overwrite, source_path, dest_path)
# when only dest exists
fileutil.write(dest_path, "dest")
self.failUnlessRaises(OSError, fileutil.rename_no_overwrite, source_path, dest_path)
self.failUnlessEqual(fileutil.read(dest_path), "dest")
# when both exist
fileutil.write(source_path, "source")
self.failUnlessRaises(OSError, fileutil.rename_no_overwrite, source_path, dest_path)
self.failUnlessEqual(fileutil.read(source_path), "source")
self.failUnlessEqual(fileutil.read(dest_path), "dest")
# when only source exists
os.remove(dest_path)
fileutil.rename_no_overwrite(source_path, dest_path)
self.failUnlessEqual(fileutil.read(dest_path), "source")
self.failIf(os.path.exists(source_path))
def test_replace_file(self):
workdir = fileutil.abspath_expanduser_unicode(u"test_replace_file")
fileutil.make_dirs(workdir)
backup_path = os.path.join(workdir, "backup")
replaced_path = os.path.join(workdir, "replaced")
replacement_path = os.path.join(workdir, "replacement")
# when none of the files exist
self.failUnlessRaises(fileutil.ConflictError, fileutil.replace_file, replaced_path, replacement_path, backup_path)
# when only replaced exists
fileutil.write(replaced_path, "foo")
self.failUnlessRaises(fileutil.ConflictError, fileutil.replace_file, replaced_path, replacement_path, backup_path)
self.failUnlessEqual(fileutil.read(replaced_path), "foo")
# when both replaced and replacement exist, but not backup
fileutil.write(replacement_path, "bar")
fileutil.replace_file(replaced_path, replacement_path, backup_path)
self.failUnlessEqual(fileutil.read(backup_path), "foo")
self.failUnlessEqual(fileutil.read(replaced_path), "bar")
self.failIf(os.path.exists(replacement_path))
# when only replacement exists
os.remove(backup_path)
os.remove(replaced_path)
fileutil.write(replacement_path, "bar")
fileutil.replace_file(replaced_path, replacement_path, backup_path)
self.failUnlessEqual(fileutil.read(replaced_path), "bar")
self.failIf(os.path.exists(replacement_path))
self.failIf(os.path.exists(backup_path))
# when replaced, replacement and backup all exist
fileutil.write(replaced_path, "foo")
fileutil.write(replacement_path, "bar")
fileutil.write(backup_path, "bak")
fileutil.replace_file(replaced_path, replacement_path, backup_path)
self.failUnlessEqual(fileutil.read(backup_path), "foo")
self.failUnlessEqual(fileutil.read(replaced_path), "bar")
self.failIf(os.path.exists(replacement_path))
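The contract exercised above: replace_file(replaced_path, replacement_path, backup_path) moves the replacement into place, preserving any existing replaced file as the backup, and raises ConflictError when no replacement exists. A usage sketch (paths are illustrative; absolute paths appear to be expected, as in the test):

import os.path
from allmydata.util import fileutil
from allmydata.util.fileutil import abspath_expanduser_unicode

workdir = abspath_expanduser_unicode(u"replace_file_example")
fileutil.make_dirs(workdir)
settings = os.path.join(workdir, "settings")
new = os.path.join(workdir, "settings.new")
bak = os.path.join(workdir, "settings.bak")

fileutil.write(new, "v2")
fileutil.replace_file(settings, new, bak)   # settings now holds "v2"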
def test_du(self):
basedir = "util/FileUtil/test_du"
fileutil.make_dirs(basedir)
@ -581,7 +649,7 @@ class FileUtil(ReallyEqualMixin, unittest.TestCase):
self.failUnlessEqual(new_mode, 0766)
new_mode = os.stat(os.path.join(workdir, "a", "b")).st_mode & 0777
self.failUnlessEqual(new_mode, 0766)
new_mode = os.stat(os.path.join(workdir,"a")).st_mode & 0777
new_mode = os.stat(os.path.join(workdir, "a")).st_mode & 0777
self.failUnlessEqual(new_mode, 0766)
new_mode = os.stat(workdir).st_mode & 0777
self.failIfEqual(new_mode, 0766)
@ -664,15 +732,15 @@ class FileUtil(ReallyEqualMixin, unittest.TestCase):
# path at which nothing exists
dnename = os.path.join(basedir, "doesnotexist")
now = time.time()
dneinfo = fileutil.get_pathinfo(dnename, now=now)
now_ns = fileutil.seconds_to_ns(time.time())
dneinfo = fileutil.get_pathinfo(dnename, now_ns=now_ns)
self.failUnlessFalse(dneinfo.exists)
self.failUnlessFalse(dneinfo.isfile)
self.failUnlessFalse(dneinfo.isdir)
self.failUnlessFalse(dneinfo.islink)
self.failUnlessEqual(dneinfo.size, None)
self.failUnlessEqual(dneinfo.mtime, now)
self.failUnlessEqual(dneinfo.ctime, now)
self.failUnlessEqual(dneinfo.mtime_ns, now_ns)
self.failUnlessEqual(dneinfo.ctime_ns, now_ns)
def test_get_pathinfo_symlink(self):
if not hasattr(os, 'symlink'):

View File

@ -5931,7 +5931,10 @@ class FakeRequest(object):
class FakeField(object):
def __init__(self, *values):
self.value = list(values)
if len(values) == 1:
self.value = values[0]
else:
self.value = list(values)
class FakeClientWithToken(object):

View File

@ -6,8 +6,9 @@ unicode and back.
import sys, os, re, locale
from types import NoneType
from allmydata.util.assertutil import precondition
from allmydata.util.assertutil import precondition, _assert
from twisted.python import usage
from twisted.python.filepath import FilePath
from allmydata.util import log
from allmydata.util.fileutil import abspath_expanduser_unicode
@ -35,9 +36,10 @@ def check_encoding(encoding):
filesystem_encoding = None
io_encoding = None
is_unicode_platform = False
use_unicode_filepath = False
def _reload():
global filesystem_encoding, io_encoding, is_unicode_platform
global filesystem_encoding, io_encoding, is_unicode_platform, use_unicode_filepath
filesystem_encoding = canonical_encoding(sys.getfilesystemencoding())
check_encoding(filesystem_encoding)
@ -61,6 +63,12 @@ def _reload():
is_unicode_platform = sys.platform in ["win32", "darwin"]
# Despite the Unicode-mode FilePath support added to Twisted in
# <https://twistedmatrix.com/trac/ticket/7805>, we can't yet use
# Unicode-mode FilePaths with INotify on non-Windows platforms
# due to <https://twistedmatrix.com/trac/ticket/7928>.
use_unicode_filepath = sys.platform == "win32"
_reload()
@ -249,6 +257,54 @@ def quote_local_unicode_path(path, quotemarks=True):
return quote_output(path, quotemarks=quotemarks, quote_newlines=True)
def quote_filepath(path, quotemarks=True):
return quote_local_unicode_path(unicode_from_filepath(path), quotemarks=quotemarks)
def extend_filepath(fp, segments):
# We cannot use FilePath.preauthChild, because
# * it has the security flaw described in <https://twistedmatrix.com/trac/ticket/6527>;
# * it may return a FilePath in the wrong mode.
for segment in segments:
fp = fp.child(segment)
if isinstance(fp.path, unicode) and not use_unicode_filepath:
return FilePath(fp.path.encode(filesystem_encoding))
else:
return fp
def to_filepath(path):
precondition(isinstance(path, unicode if use_unicode_filepath else basestring),
path=path)
if isinstance(path, unicode) and not use_unicode_filepath:
path = path.encode(filesystem_encoding)
if sys.platform == "win32":
_assert(isinstance(path, unicode), path=path)
if path.startswith(u"\\\\?\\") and len(path) > 4:
# FilePath normally strips trailing path separators, but not in this case.
path = path.rstrip(u"\\")
return FilePath(path)
def _decode(s):
precondition(isinstance(s, basestring), s=s)
if isinstance(s, bytes):
return s.decode(filesystem_encoding)
else:
return s
def unicode_from_filepath(fp):
precondition(isinstance(fp, FilePath), fp=fp)
return _decode(fp.path)
def unicode_segments_from(base_fp, ancestor_fp):
precondition(isinstance(base_fp, FilePath), base_fp=base_fp)
precondition(isinstance(ancestor_fp, FilePath), ancestor_fp=ancestor_fp)
return base_fp.asTextMode().segmentsFrom(ancestor_fp.asTextMode())
def unicode_platform():
"""
@ -296,3 +352,6 @@ def listdir_unicode(path):
return os.listdir(path)
else:
return listdir_unicode_fallback(path)
def listdir_filepath(fp):
return listdir_unicode(unicode_from_filepath(fp))
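Together these helpers keep FilePath handling consistent across platforms. A quick sketch of how they compose on a POSIX platform (paths illustrative):

from allmydata.util.encodingutil import to_filepath, extend_filepath, \
     unicode_from_filepath, unicode_segments_from

base_fp = to_filepath(u"/home/alice/magic")
child_fp = extend_filepath(base_fp, [u"docs", u"notes.txt"])
print unicode_from_filepath(child_fp)            # u'/home/alice/magic/docs/notes.txt'
print unicode_segments_from(child_fp, base_fp)   # [u'docs', u'notes.txt']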

View File

@ -248,41 +248,28 @@ def move_into_place(source, dest):
os.rename(source, dest)
def write_atomically(target, contents, mode="b"):
f = open(target+".tmp", "w"+mode)
try:
with open(target+".tmp", "w"+mode) as f:
f.write(contents)
finally:
f.close()
move_into_place(target+".tmp", target)
def write(path, data, mode="wb"):
wf = open(path, mode)
try:
wf.write(data)
finally:
wf.close()
with open(path, mode) as f:
f.write(data)
def read(path):
rf = open(path, "rb")
try:
with open(path, "rb") as rf:
return rf.read()
finally:
rf.close()
def put_file(path, inf):
precondition_abspath(path)
# TODO: create temporary file and move into place?
outf = open(path, "wb")
try:
with open(path, "wb") as outf:
while True:
data = inf.read(32768)
if not data:
break
outf.write(data)
finally:
outf.close()
def precondition_abspath(path):
if not isinstance(path, unicode):
@ -671,30 +658,33 @@ else:
except EnvironmentError:
reraise(ConflictError)
PathInfo = namedtuple('PathInfo', 'isdir isfile islink exists size mtime ctime')
PathInfo = namedtuple('PathInfo', 'isdir isfile islink exists size mtime_ns ctime_ns')
def get_pathinfo(path_u, now=None):
def seconds_to_ns(t):
return int(t * 1000000000)
def get_pathinfo(path_u, now_ns=None):
try:
statinfo = os.lstat(path_u)
mode = statinfo.st_mode
return PathInfo(isdir =stat.S_ISDIR(mode),
isfile=stat.S_ISREG(mode),
islink=stat.S_ISLNK(mode),
exists=True,
size =statinfo.st_size,
mtime =statinfo.st_mtime,
ctime =statinfo.st_ctime,
return PathInfo(isdir =stat.S_ISDIR(mode),
isfile =stat.S_ISREG(mode),
islink =stat.S_ISLNK(mode),
exists =True,
size =statinfo.st_size,
mtime_ns=seconds_to_ns(statinfo.st_mtime),
ctime_ns=seconds_to_ns(statinfo.st_ctime),
)
except OSError as e:
if e.errno == ENOENT:
if now is None:
now = time.time()
return PathInfo(isdir =False,
isfile=False,
islink=False,
exists=False,
size =None,
mtime =now,
ctime =now,
if now_ns is None:
now_ns = seconds_to_ns(time.time())
return PathInfo(isdir =False,
isfile =False,
islink =False,
exists =False,
size =None,
mtime_ns=now_ns,
ctime_ns=now_ns,
)
raise
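A small sketch of the new nanosecond-based API (path illustrative):

import time
from allmydata.util import fileutil

info = fileutil.get_pathinfo(u"/tmp/does-not-exist",
                             now_ns=fileutil.seconds_to_ns(time.time()))
assert not info.exists
# for a missing path, mtime_ns/ctime_ns fall back to the supplied "now"
print info.mtime_ns, info.ctime_ns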

View File

@ -395,10 +395,10 @@ class TokenOnlyWebApi(resource.Resource):
provide the "t=" argument to indicate the return-value (the only
valid value for this is "json")
Subclasses should override '_render_json' which should process the
API call and return a valid JSON object. This will only be called
if the correct token is present and valid (during renderHTTP
processing).
Subclasses should override 'post_json' which should process the
API call and return a string which encodes a valid JSON
object. This will only be called if the correct token is present
and valid (during renderHTTP processing).
"""
def __init__(self, client):
@ -416,7 +416,7 @@ class TokenOnlyWebApi(resource.Resource):
# argument to work if you passed it as a GET-style argument
token = None
if req.fields and 'token' in req.fields:
token = req.fields['token'].value[0]
token = req.fields['token'].value.strip()
if not token:
raise WebError("Missing token", http.UNAUTHORIZED)
if not timing_safe_compare(token, self.client.get_auth_token()):

View File

@ -0,0 +1,41 @@
import simplejson
from allmydata.web.common import TokenOnlyWebApi
class MagicFolderWebApi(TokenOnlyWebApi):
"""
I provide the web-based API for Magic Folder status etc.
"""
def __init__(self, client):
TokenOnlyWebApi.__init__(self, client)
self.client = client
def post_json(self, req):
req.setHeader("content-type", "application/json")
data = []
for item in self.client._magic_folder.uploader.get_status():
d = dict(
path=item.relpath_u,
status=item.status_history()[-1][0],
kind='upload',
)
for (status, ts) in item.status_history():
d[status + '_at'] = ts
d['percent_done'] = item.progress.progress
data.append(d)
for item in self.client._magic_folder.downloader.get_status():
d = dict(
path=item.relpath_u,
status=item.status_history()[-1][0],
kind='download',
)
for (status, ts) in item.status_history():
d[status + '_at'] = ts
d['percent_done'] = item.progress.progress
data.append(d)
return simplejson.dumps(data)
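A client of this endpoint POSTs the API token plus t=json, per TokenOnlyWebApi above. A sketch, assuming the default web port and that the token lives in the node's private/api_auth_token file:

import urllib, urllib2, simplejson

token = open("private/api_auth_token").read().strip()
body = urllib.urlencode({"token": token, "t": "json"})
resp = urllib2.urlopen("http://127.0.0.1:3456/magic_folder", body)
for item in simplejson.loads(resp.read()):
    print item["kind"], item["path"], item["status"], item["percent_done"]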

View File

@ -12,7 +12,7 @@ from allmydata import get_package_versions_string
from allmydata.util import log
from allmydata.interfaces import IFileNode
from allmydata.web import filenode, directory, unlinked, status, operations
from allmydata.web import storage
from allmydata.web import storage, magic_folder
from allmydata.web.common import abbreviate_size, getxmlfile, WebError, \
get_arg, RenderMixin, get_format, get_mutable_type, render_time_delta, render_time, render_time_attr
@ -154,6 +154,9 @@ class Root(rend.Page):
self.child_uri = URIHandler(client)
self.child_cap = URIHandler(client)
# handler for "/magic_folder" URIs
self.child_magic_folder = magic_folder.MagicFolderWebApi(client)
self.child_file = FileHandler(client)
self.child_named = FileHandler(client)
self.child_status = status.Status(client.get_history())

View File

@ -20,13 +20,16 @@
<li>Files Retrieved (mutable): <span n:render="retrieves" /></li>
</ul>
<h2>Drop-Uploader</h2>
<h2>Magic Folder</h2>
<ul>
<li>Local Directories Monitored: <span n:render="drop_monitored" /></li>
<li>Files Uploaded: <span n:render="drop_uploads" /></li>
<li>File Changes Queued: <span n:render="drop_queued" /></li>
<li>Failed Uploads: <span n:render="drop_failed" /></li>
<li>Local Directories Monitored: <span n:render="magic_uploader_monitored" /></li>
<li>Files Uploaded: <span n:render="magic_uploader_succeeded" /></li>
<li>Files Queued for Upload: <span n:render="magic_uploader_queued" /></li>
<li>Failed Uploads: <span n:render="magic_uploader_failed" /></li>
<li>Files Downloaded: <span n:render="magic_downloader_succeeded" /></li>
<li>Files Queued for Download: <span n:render="magic_downloader_queued" /></li>
<li>Failed Downloads: <span n:render="magic_downloader_failed" /></li>
</ul>
<h2>Raw Stats:</h2>

View File

@ -1273,21 +1273,34 @@ class Statistics(rend.Page):
return "%s files / %s bytes (%s)" % (files, bytes,
abbreviate_size(bytes))
def render_drop_monitored(self, ctx, data):
dirs = data["counters"].get("drop_upload.dirs_monitored", 0)
def render_magic_uploader_monitored(self, ctx, data):
dirs = data["counters"].get("magic_folder.uploader.dirs_monitored", 0)
return "%s directories" % (dirs,)
def render_drop_uploads(self, ctx, data):
def render_magic_uploader_succeeded(self, ctx, data):
# TODO: bytes uploaded
files = data["counters"].get("drop_upload.files_uploaded", 0)
files = data["counters"].get("magic_folder.uploader.objects_succeeded", 0)
return "%s files" % (files,)
def render_drop_queued(self, ctx, data):
files = data["counters"].get("drop_upload.files_queued", 0)
def render_magic_uploader_queued(self, ctx, data):
files = data["counters"].get("magic_folder.uploader.objects_queued", 0)
return "%s files" % (files,)
def render_drop_failed(self, ctx, data):
files = data["counters"].get("drop_upload.files_failed", 0)
def render_magic_uploader_failed(self, ctx, data):
files = data["counters"].get("magic_folder.uploader.objects_failed", 0)
return "%s files" % (files,)
def render_magic_downloader_succeeded(self, ctx, data):
# TODO: bytes downloaded
files = data["counters"].get("magic_folder.downloader.objects_succeeded", 0)
return "%s files" % (files,)
def render_magic_downloader_queued(self, ctx, data):
files = data["counters"].get("magic_folder.downloader.objects_queued", 0)
return "%s files" % (files,)
def render_magic_downloader_failed(self, ctx, data):
files = data["counters"].get("magic_folder.downloader.objects_failed", 0)
return "%s files" % (files,)
def render_raw(self, ctx, data):

View File

@ -0,0 +1,320 @@
# Windows near-equivalent to twisted.internet.inotify
# This should only be imported on Windows.
import os, sys
from twisted.internet import reactor
from twisted.internet.threads import deferToThread
from allmydata.util.fake_inotify import humanReadableMask, \
IN_WATCH_MASK, IN_ACCESS, IN_MODIFY, IN_ATTRIB, IN_CLOSE_NOWRITE, IN_CLOSE_WRITE, \
IN_OPEN, IN_MOVED_FROM, IN_MOVED_TO, IN_CREATE, IN_DELETE, IN_DELETE_SELF, \
IN_MOVE_SELF, IN_UNMOUNT, IN_Q_OVERFLOW, IN_IGNORED, IN_ONLYDIR, IN_DONT_FOLLOW, \
IN_MASK_ADD, IN_ISDIR, IN_ONESHOT, IN_CLOSE, IN_MOVED, IN_CHANGED
[humanReadableMask, \
IN_WATCH_MASK, IN_ACCESS, IN_MODIFY, IN_ATTRIB, IN_CLOSE_NOWRITE, IN_CLOSE_WRITE, \
IN_OPEN, IN_MOVED_FROM, IN_MOVED_TO, IN_CREATE, IN_DELETE, IN_DELETE_SELF, \
IN_MOVE_SELF, IN_UNMOUNT, IN_Q_OVERFLOW, IN_IGNORED, IN_ONLYDIR, IN_DONT_FOLLOW, \
IN_MASK_ADD, IN_ISDIR, IN_ONESHOT, IN_CLOSE, IN_MOVED, IN_CHANGED]
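# (The bare list above just references the imported names; presumably it is
# there to keep pyflakes from flagging the re-exported constants as unused.)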
from allmydata.util.assertutil import _assert, precondition
from allmydata.util.encodingutil import quote_output
from allmydata.util import log, fileutil
from allmydata.util.pollmixin import PollMixin
from ctypes import WINFUNCTYPE, WinError, windll, POINTER, byref, create_string_buffer, \
addressof, get_last_error
from ctypes.wintypes import BOOL, HANDLE, DWORD, LPCWSTR, LPVOID
# <http://msdn.microsoft.com/en-us/library/gg258116%28v=vs.85%29.aspx>
FILE_LIST_DIRECTORY = 1
# <http://msdn.microsoft.com/en-us/library/aa363858%28v=vs.85%29.aspx>
CreateFileW = WINFUNCTYPE(
HANDLE, LPCWSTR, DWORD, DWORD, LPVOID, DWORD, DWORD, HANDLE,
use_last_error=True
)(("CreateFileW", windll.kernel32))
FILE_SHARE_READ = 0x00000001
FILE_SHARE_WRITE = 0x00000002
FILE_SHARE_DELETE = 0x00000004
OPEN_EXISTING = 3
FILE_FLAG_BACKUP_SEMANTICS = 0x02000000
# <http://msdn.microsoft.com/en-us/library/ms724211%28v=vs.85%29.aspx>
CloseHandle = WINFUNCTYPE(
BOOL, HANDLE,
use_last_error=True
)(("CloseHandle", windll.kernel32))
# <http://msdn.microsoft.com/en-us/library/aa365465%28v=vs.85%29.aspx>
ReadDirectoryChangesW = WINFUNCTYPE(
BOOL, HANDLE, LPVOID, DWORD, BOOL, DWORD, POINTER(DWORD), LPVOID, LPVOID,
use_last_error=True
)(("ReadDirectoryChangesW", windll.kernel32))
FILE_NOTIFY_CHANGE_FILE_NAME = 0x00000001
FILE_NOTIFY_CHANGE_DIR_NAME = 0x00000002
FILE_NOTIFY_CHANGE_ATTRIBUTES = 0x00000004
#FILE_NOTIFY_CHANGE_SIZE = 0x00000008
FILE_NOTIFY_CHANGE_LAST_WRITE = 0x00000010
FILE_NOTIFY_CHANGE_LAST_ACCESS = 0x00000020
#FILE_NOTIFY_CHANGE_CREATION = 0x00000040
FILE_NOTIFY_CHANGE_SECURITY = 0x00000100
# <http://msdn.microsoft.com/en-us/library/aa364391%28v=vs.85%29.aspx>
FILE_ACTION_ADDED = 0x00000001
FILE_ACTION_REMOVED = 0x00000002
FILE_ACTION_MODIFIED = 0x00000003
FILE_ACTION_RENAMED_OLD_NAME = 0x00000004
FILE_ACTION_RENAMED_NEW_NAME = 0x00000005
_action_to_string = {
FILE_ACTION_ADDED : "FILE_ACTION_ADDED",
FILE_ACTION_REMOVED : "FILE_ACTION_REMOVED",
FILE_ACTION_MODIFIED : "FILE_ACTION_MODIFIED",
FILE_ACTION_RENAMED_OLD_NAME : "FILE_ACTION_RENAMED_OLD_NAME",
FILE_ACTION_RENAMED_NEW_NAME : "FILE_ACTION_RENAMED_NEW_NAME",
}
_action_to_inotify_mask = {
FILE_ACTION_ADDED : IN_CREATE,
FILE_ACTION_REMOVED : IN_DELETE,
FILE_ACTION_MODIFIED : IN_CHANGED,
FILE_ACTION_RENAMED_OLD_NAME : IN_MOVED_FROM,
FILE_ACTION_RENAMED_NEW_NAME : IN_MOVED_TO,
}
INVALID_HANDLE_VALUE = 0xFFFFFFFF
TRUE = 1   # Windows BOOL semantics: nonzero is true
FALSE = 0
class Event(object):
"""
* action: a FILE_ACTION_* constant (not a bit mask)
* filename: a Unicode string, giving the name relative to the watched directory
"""
def __init__(self, action, filename):
self.action = action
self.filename = filename
def __repr__(self):
return "Event(%r, %r)" % (_action_to_string.get(self.action, self.action), self.filename)
class FileNotifyInformation(object):
"""
I represent a buffer containing FILE_NOTIFY_INFORMATION structures, and can
iterate over those structures, decoding them into Event objects.
"""
def __init__(self, size=1024):
self.size = size
self.buffer = create_string_buffer(size)
address = addressof(self.buffer)
_assert(address & 3 == 0, "address 0x%X returned by create_string_buffer is not DWORD-aligned" % (address,))
self.data = None
def read_changes(self, hDirectory, recursive, filter):
bytes_returned = DWORD(0)
r = ReadDirectoryChangesW(hDirectory,
self.buffer,
self.size,
recursive,
filter,
byref(bytes_returned),
None, # NULL -> no overlapped I/O
None # NULL -> no completion routine
)
if r == 0:
self.data = None
raise WinError(get_last_error())
self.data = self.buffer.raw[:bytes_returned.value]
def __iter__(self):
# Iterator implemented as generator: <http://docs.python.org/library/stdtypes.html#generator-types>
if self.data is None:
return
pos = 0
while True:
bytes = self._read_dword(pos+8)
s = Event(self._read_dword(pos+4),
self.data[pos+12 : pos+12+bytes].decode('utf-16-le'))
next_entry_offset = self._read_dword(pos)
yield s
if next_entry_offset == 0:
break
pos = pos + next_entry_offset
def _read_dword(self, i):
# little-endian
return ( ord(self.data[i]) |
(ord(self.data[i+1]) << 8) |
(ord(self.data[i+2]) << 16) |
(ord(self.data[i+3]) << 24))
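The hand-rolled byte arithmetic above is a little-endian unsigned-32 read; the struct module would express the same thing more directly, e.g.:

import struct

def _read_dword(self, i):
    # equivalent little-endian DWORD read
    return struct.unpack_from("<I", self.data, i)[0]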
def _open_directory(path_u):
hDirectory = CreateFileW(path_u,
FILE_LIST_DIRECTORY, # access rights
FILE_SHARE_READ | FILE_SHARE_WRITE | FILE_SHARE_DELETE,
# don't prevent other processes from accessing
None, # no security descriptor
OPEN_EXISTING, # directory must already exist
FILE_FLAG_BACKUP_SEMANTICS, # necessary to open a directory
None # no template file
)
if hDirectory == INVALID_HANDLE_VALUE:
e = WinError(get_last_error())
raise OSError("Opening directory %s gave WinError: %s" % (quote_output(path_u), e))
return hDirectory
def simple_test():
path_u = u"test"
filter = FILE_NOTIFY_CHANGE_FILE_NAME | FILE_NOTIFY_CHANGE_DIR_NAME | FILE_NOTIFY_CHANGE_LAST_WRITE
recursive = FALSE
hDirectory = _open_directory(path_u)
fni = FileNotifyInformation()
print "Waiting..."
while True:
fni.read_changes(hDirectory, recursive, filter)
print repr(fni.data)
for info in fni:
print info
NOT_STARTED = "NOT_STARTED"
STARTED = "STARTED"
STOPPING = "STOPPING"
STOPPED = "STOPPED"
class INotify(PollMixin):
def __init__(self):
self._state = NOT_STARTED
self._filter = None
self._callbacks = None
self._hDirectory = None
self._path = None
self._pending = set()
self._pending_delay = 1.0
self._pending_call = None
self.recursive_includes_new_subdirectories = True
def set_pending_delay(self, delay):
self._pending_delay = delay
def startReading(self):
deferToThread(self._thread)
return self.poll(lambda: self._state != NOT_STARTED)
def stopReading(self):
# FIXME race conditions
if self._state != STOPPED:
self._state = STOPPING
if self._pending_call:
self._pending_call.cancel()
self._pending_call = None
def wait_until_stopped(self):
try:
fileutil.write(os.path.join(self._path.path, u".ignore-me"), "")
except IOError:
pass
return self.poll(lambda: self._state == STOPPED)
def watch(self, path, mask=IN_WATCH_MASK, autoAdd=False, callbacks=None, recursive=False):
precondition(self._state == NOT_STARTED, "watch() can only be called before startReading()", state=self._state)
precondition(self._filter is None, "only one watch is supported")
precondition(isinstance(autoAdd, bool), autoAdd=autoAdd)
precondition(isinstance(recursive, bool), recursive=recursive)
#precondition(autoAdd == recursive, "need autoAdd and recursive to be the same", autoAdd=autoAdd, recursive=recursive)
self._path = path
path_u = path.path
if not isinstance(path_u, unicode):
path_u = path_u.decode(sys.getfilesystemencoding())
_assert(isinstance(path_u, unicode), path_u=path_u)
self._filter = FILE_NOTIFY_CHANGE_FILE_NAME | FILE_NOTIFY_CHANGE_DIR_NAME | FILE_NOTIFY_CHANGE_LAST_WRITE
if mask & (IN_ACCESS | IN_CLOSE_NOWRITE | IN_OPEN):
self._filter = self._filter | FILE_NOTIFY_CHANGE_LAST_ACCESS
if mask & IN_ATTRIB:
self._filter = self._filter | FILE_NOTIFY_CHANGE_ATTRIBUTES | FILE_NOTIFY_CHANGE_SECURITY
self._recursive = TRUE if recursive else FALSE
self._callbacks = callbacks or []
self._hDirectory = _open_directory(path_u)
def _thread(self):
try:
_assert(self._filter is not None, "no watch set")
# To call Twisted or Tahoe APIs, use reactor.callFromThread as described in
# <http://twistedmatrix.com/documents/current/core/howto/threading.html>.
fni = FileNotifyInformation()
while True:
self._state = STARTED
try:
fni.read_changes(self._hDirectory, self._recursive, self._filter)
except WindowsError as e:
self._state = STOPPING
if self._check_stop():
return
for info in fni:
# print info
path = self._path.preauthChild(info.filename) # FilePath with Unicode path
if info.action == FILE_ACTION_MODIFIED and path.isdir():
# print "Filtering out %r" % (info,)
continue
#mask = _action_to_inotify_mask.get(info.action, IN_CHANGED)
def _do_pending_calls():
self._pending_call = None
for path in self._pending:
if self._callbacks:
for cb in self._callbacks:
try:
cb(None, path, IN_CHANGED)
except Exception as e:
log.err(e)
self._pending = set()
def _maybe_notify(path):
if path not in self._pending:
self._pending.add(path)
if self._state not in [STOPPING, STOPPED]:
_do_pending_calls()
# if self._pending_call is None and self._state not in [STOPPING, STOPPED]:
# self._pending_call = reactor.callLater(self._pending_delay, _do_pending_calls)
reactor.callFromThread(_maybe_notify, path)
if self._check_stop():
return
except Exception as e:
log.err(e)
self._state = STOPPED
raise
def _check_stop(self):
if self._state == STOPPING:
hDirectory = self._hDirectory
self._callbacks = None
self._hDirectory = None
CloseHandle(hDirectory)
self._state = STOPPED
if self._pending_call:
self._pending_call.cancel()
self._pending_call = None
return self._state == STOPPED
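Putting the pieces together, a caller would be expected to use this class roughly as follows (the directory path and callback are illustrative; note that watch() must be called before startReading(), and callbacks are invoked as cb(None, path, IN_CHANGED)):

from twisted.python.filepath import FilePath

def on_change(ignored, path, events_mask):
    print "changed:", path.path

notifier = INotify()
notifier.set_pending_delay(1.0)
notifier.watch(FilePath(u"C:\\magic"), callbacks=[on_change], recursive=True)
d = notifier.startReading()   # fires once the watcher thread is running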