A general purpose uploader module for Net::SFTP. It can upload IO objects, files, and even entire directory trees via SFTP, and provides a flexible progress reporting mechanism.
To upload a single file to the remote server, simply specify both the local and remote paths:
uploader = sftp.upload("/path/to/local.txt", "/path/to/remote.txt")
By default, this operates asynchronously, so if you want to block until the upload finishes, you can use the 'bang' variant:
sftp.upload!("/path/to/local.txt", "/path/to/remote.txt")
Or, if you have multiple uploads that you want to run in parallel, you can employ the wait method of the returned object:
uploads = %w(file1 file2 file3).map { |f| sftp.upload(f, "remote/#{f}") }
uploads.each { |u| u.wait }
To upload an entire directory tree, recursively, simply pass the directory path as the first parameter:
sftp.upload!("/path/to/directory", "/path/to/remote")
This will upload "/path/to/directory", its contents, its subdirectories, and their contents, recursively, to "/path/to/remote" on the remote server.
To upload a directory without creating it on the remote server, pass :mkdir => false:

sftp.upload!("/path/to/directory", "/path/to/remote", :mkdir => false)
If you want to send data to a file on the remote server, but the data is in memory, you can pass an IO object and upload its contents:
require 'stringio'

io = StringIO.new(data)
sftp.upload!(io, "/path/to/remote")
The following options are supported:
- :progress - either a block or an object to act as a progress callback. See the discussion of "progress monitoring" below.
- :requests - the number of pending SFTP requests to allow at any given time. When uploading an entire directory tree recursively, this defaults to 16; otherwise it defaults to 2. Setting this higher might improve throughput; reducing it will reduce throughput.
- :read_size - the maximum number of bytes to read at a time from the source. Increasing this value might improve throughput. It defaults to 32,000 bytes.
- :name - the filename to report to the progress monitor when an IO object is given as local. This defaults to "<memory>".
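For example, these options can be combined in a single call. The following sketch is illustrative only; the buffer contents, reported name, and tuning values are placeholders, not recommended settings:

require 'stringio'

# upload an in-memory buffer, reporting it to the progress monitor as
# "report.txt", with more concurrent requests and larger reads
io = StringIO.new("quarterly report data")
sftp.upload!(io, "/path/to/remote/report.txt",
  :name      => "report.txt",
  :requests  => 8,
  :read_size => 64 * 1024)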
Progress Monitoring
Sometimes it is desirable to track the progress of an upload. There are two ways to do this: either using a callback block, or a special custom object.
Using a block, it's pretty straightforward:
sftp.upload!("local", "remote") do |event, uploader, *args| case event when :open then # args[0] : file metadata puts "starting upload: #{args[0].local} -> #{args[0].remote} (#{args[0].size} bytes}" when :put then # args[0] : file metadata # args[1] : byte offset in remote file # args[2] : data being written (as string) puts "writing #{args[2].length} bytes to #{args[0].remote} starting at #{args[1]}" when :close then # args[0] : file metadata puts "finished with #{args[0].remote}" when :mkdir then # args[0] : remote path name puts "creating directory #{args[0]}" when :finish then puts "all done!" end
However, for more complex implementations (e.g., GUI interfaces and such) a block can become cumbersome. In those cases, you can create custom handler objects that respond to certain methods, and then pass your handler to the uploader:
class CustomHandler
  def on_open(uploader, file)
    puts "starting upload: #{file.local} -> #{file.remote} (#{file.size} bytes)"
  end

  def on_put(uploader, file, offset, data)
    puts "writing #{data.length} bytes to #{file.remote} starting at #{offset}"
  end

  def on_close(uploader, file)
    puts "finished with #{file.remote}"
  end

  def on_mkdir(uploader, path)
    puts "creating directory #{path}"
  end

  def on_finish(uploader)
    puts "all done!"
  end
end

sftp.upload!("local", "remote", :progress => CustomHandler.new)
If you omit any of those methods, the progress updates for those missing events will be ignored. You can create a catchall method named "call" for those, instead.
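A minimal sketch of such a catchall handler is shown below. It assumes the catchall receives the same arguments as the block form (the event name, the uploader, and any event-specific arguments); treat the exact signature as an assumption rather than documented behavior:

class LoggingHandler
  # Assumed signature: event name, uploader, then event-specific args,
  # mirroring the block form shown earlier.
  def call(event, uploader, *args)
    puts "#{event}: #{args.inspect}"
  end
end

sftp.upload!("local", "remote", :progress => LoggingHandler.new)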
Included modules
- Net::SSH::Loggable
Constants
DEFAULT_READ_SIZE = 32_000
  The default number of bytes to read from disk at a time.

LiveFile = Struct.new(:local, :remote, :io, :size, :handle)
  A simple struct for recording metadata about the file currently being uploaded.

RECURSIVE_READERS = 16
  The number of readers to use when uploading a directory.

SINGLE_FILE_READERS = 2
  The number of readers to use when uploading a single file.
Attributes
local [R]
  The source of the upload (on the local server)

options [R]
  The hash of options that were given when the object was instantiated

properties [R]
  The properties hash for this object

remote [R]
  The destination of the upload (on the remote server)

sftp [R]
  The SFTP session object used by this upload instance
Public Instance methods
Returns the property with the given name. This allows Upload instances to store their own state when used as part of a state machine.
# File lib/net/sftp/operations/upload.rb, line 209
def [](name)
  @properties[name.to_sym]
end
Sets the given property to the given value. This allows Upload instances to store their own state when used as part of a state machine.
# File lib/net/sftp/operations/upload.rb, line 215
def []=(name, value)
  @properties[name.to_sym] = value
end
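For instance, a caller might use these accessors to stash its own bookkeeping on the uploader. The key below is an arbitrary example, not part of the API:

uploader = sftp.upload("/path/to/local.txt", "/path/to/remote.txt")
uploader[:started_at] = Time.now   # caller-defined key stored on the uploader
uploader.wait
puts "upload took #{Time.now - uploader[:started_at]} seconds"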
Forces the transfer to stop.
# File lib/net/sftp/operations/upload.rb, line 195
def abort!
  @active = 0
  @stack.clear
  @uploads.clear
end
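As a sketch of how abort! might be used, the following drives the event loop manually and stops the transfer after a deadline. The timeout and the loop structure are illustrative, not prescribed by the API:

uploader = sftp.upload("/path/to/local.bin", "/path/to/remote.bin")
deadline = Time.now + 30   # illustrative timeout

sftp.loop do
  uploader.abort! if Time.now > deadline   # stop the transfer if it runs too long
  uploader.active?                         # keep looping while the upload is active
end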
Returns true if the uploader is currently running. When this is false, the uploader has finished processing.
# File lib/net/sftp/operations/upload.rb, line 190
def active?
  @active > 0 || @stack.any?
end
Returns true if a directory tree is being uploaded, and false if only a single file is being uploaded.
# File lib/net/sftp/operations/upload.rb, line 184
def recursive?
  @recursive
end
Blocks until the upload has completed.
# File lib/net/sftp/operations/upload.rb, line 202
def wait
  sftp.loop { active? }
  self
end