Merge branch 'mitmweb-options' of https://github.com/MatthewShao/mitmproxy into mitmweb-options

Matthew Shao 2017-07-18 09:14:28 +08:00
commit 70bb123101
39 changed files with 1103 additions and 250 deletions

View File

@@ -16,7 +16,7 @@ mechanism:
 If you want to peek into (SSL-protected) non-HTTP connections, check out the :ref:`tcpproxy`
 feature.
 If you want to ignore traffic from mitmproxy's processing because of large response bodies,
-take a look at the :ref:`responsestreaming` feature.
+take a look at the :ref:`streaming` feature.

 How it works
 ------------
@@ -89,7 +89,7 @@ Here are some other examples for ignore patterns:
 .. seealso::

     - :ref:`tcpproxy`
-    - :ref:`responsestreaming`
+    - :ref:`streaming`
     - mitmproxy's "Limit" feature

 .. rubric:: Footnotes

View File

@@ -1,68 +0,0 @@
.. _responsestreaming:
Response Streaming
==================
By using mitmproxy's streaming feature, response contents can be passed to the client incrementally
before they have been fully received by the proxy. This is especially useful for large binary files
such as videos, where buffering the whole file slows down the client's browser.
By default, mitmproxy will read the entire response, perform any indicated
manipulations on it and then send the (possibly modified) response to
the client. In some cases this is undesirable and you may wish to "stream"
the response back to the client. When streaming is enabled, the response is
not buffered on the proxy but directly sent back to the client instead.
On the command-line
-------------------
Streaming can be enabled on the command line for all response bodies exceeding a certain size.
The SIZE argument understands k/m/g suffixes, e.g. 3m for 3 megabytes.
================== =================
command-line ``--stream SIZE``
================== =================
.. warning::
When response streaming is enabled, **streamed response contents will not be
recorded or preserved in any way.**
.. note::
When response streaming is enabled, the response body cannot be modified by the usual means.
Customizing Response Streaming
------------------------------
You can also use a script to customize exactly which responses are streamed.
Responses that should be tagged for streaming by setting their ``.stream``
attribute to ``True``:
.. literalinclude:: ../../examples/complex/stream.py
:caption: examples/complex/stream.py
:language: python
Implementation Details
----------------------
When response streaming is enabled, portions of the code which would have otherwise performed
changes on the response body will see an empty response body. Any modifications will be ignored.
Streamed responses are usually sent in chunks of 4096 bytes. If the response is sent with a
``Transfer-Encoding: chunked`` header, the response will be streamed one chunk at a time.
Modifying streamed data
-----------------------
If the ``.stream`` attribute is callable, ``.stream`` will wrap the generator that yields all
chunks.
.. literalinclude:: ../../examples/complex/stream_modify.py
:caption: examples/complex/stream_modify.py
:language: python
.. seealso::
- :ref:`passthrough`

docs/features/streaming.rst Normal file
View File

@@ -0,0 +1,102 @@
.. _streaming:
HTTP Streaming
==============
By default, mitmproxy will read the entire request/response, perform any indicated
manipulations on it and then send the (possibly modified) message to
the other party. In some cases this is undesirable and you may wish to "stream"
the request/response. When streaming is enabled, the request/response is
not buffered on the proxy but directly sent to the server/client instead.
HTTP headers are still fully buffered before being sent.
Request Streaming
-----------------
Request streaming can be used to incrementally stream a request body to the server
before it has been fully received by the proxy. This is useful for large file uploads.
Response Streaming
------------------
By using mitmproxy's streaming feature, response contents can be passed to the client incrementally
before they have been fully received by the proxy. This is especially useful for large binary files
such as videos, where buffering the whole file slows down the client's browser.
On the command-line
-------------------
Streaming can be enabled on the command line for all request and response bodies exceeding a certain size.
The SIZE argument understands k/m/g suffixes, e.g. 3m for 3 megabytes.
================== =================
command-line ``--set stream_large_bodies=SIZE``
================== =================
.. warning::
When streaming is enabled, **streamed request/response contents will not be
recorded or preserved in any way.**
.. note::
When streaming is enabled, the request/response body cannot be modified by the usual means.
Customizing Streaming
---------------------
You can also use a script to customize exactly which requests or responses are streamed.
Requests or responses can be tagged for streaming by setting their ``.stream``
attribute to ``True``:
.. literalinclude:: ../../examples/complex/stream.py
:caption: examples/complex/stream.py
:language: python
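
For orientation, the bundled example simply turns streaming on for every response.
A minimal variation (a sketch, not part of this diff) that only streams responses
announcing a large body could look like this; the one-megabyte threshold is an
illustrative choice:

.. code-block:: python

    def responseheaders(flow):
        # Stream only when the server announces a large body; the 1 MiB
        # cut-off here is arbitrary and only for illustration.
        if int(flow.response.headers.get("content-length", 0)) > 1024 * 1024:
            flow.response.stream = True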
Implementation Details
----------------------
When streaming is enabled, portions of the code which would otherwise have performed
changes on the request/response body will see an empty body. Any modifications will be ignored.
Streamed bodies are usually sent in chunks of 4096 bytes. If the response is sent with a
``Transfer-Encoding: chunked`` header, the response will be streamed one chunk at a time.
Modifying streamed data
-----------------------
If the ``.stream`` attribute is callable, ``.stream`` will wrap the generator that yields all
chunks.
.. literalinclude:: ../../examples/complex/stream_modify.py
:caption: examples/complex/stream_modify.py
:language: python
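
Because a callable ``.stream`` wraps the chunk generator, a modifying addon boils
down to a generator transformer. A minimal sketch in the spirit of
``examples/complex/stream_modify.py`` (the byte patterns are made up for
illustration):

.. code-block:: python

    def modify(chunks):
        # Receives the chunk generator; yields (possibly rewritten) chunks.
        for chunk in chunks:
            yield chunk.replace(b"foo", b"bar")

    def responseheaders(flow):
        flow.response.stream = modify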
WebSocket Streaming
===================
The WebSocket streaming feature can be used to send the frames as soon as they arrive. This can be useful for large binary file transfers.
On the command-line
-------------------
Streaming can be enabled on the command line for all WebSocket frames.
================== =================
command-line ``--set stream_websockets=true``
================== =================
.. note::
When WebSocket streaming is enabled, the message payload cannot be modified.
Implementation Details
----------------------
When WebSocket streaming is enabled, portions of the code which may perform changes to the WebSocket message payloads
will have no effect on the actual payload sent to the server, as the frames are immediately forwarded.
In contrast to HTTP streaming, where the body is not stored, the message payload will still be stored in the WebSocket Flow.
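
As this commit's tests demonstrate, streaming can also be enabled per flow from an
addon instead of globally via the ``stream_websockets`` option; a minimal sketch:

.. code-block:: python

    def websocket_start(flow):
        # WebSocketFlow.stream defaults to False (see mitmproxy/websocket.py
        # in this diff); setting it forwards frames as they arrive.
        flow.stream = True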
.. seealso::
- :ref:`passthrough`

View File

@@ -28,4 +28,4 @@ feature.
 .. seealso::

     - :ref:`passthrough`
-    - :ref:`responsestreaming`
+    - :ref:`streaming`

View File

@@ -33,7 +33,7 @@
     features/passthrough
     features/proxyauth
     features/reverseproxy
-    features/responsestreaming
+    features/streaming
     features/socksproxy
     features/sticky
     features/tcpproxy

View File

@@ -1,6 +1,6 @@
 def responseheaders(flow):
     """
     Enables streaming for all responses.
-    This is equivalent to passing `--stream 0` to mitmproxy.
+    This is equivalent to passing `--set stream_large_bodies=1` to mitmproxy.
     """
     flow.response.stream = True

View File

@@ -52,11 +52,19 @@ class Script:
     def tick(self):
         if time.time() - self.last_load > self.ReloadInterval:
-            mtime = os.stat(self.fullpath).st_mtime
+            try:
+                mtime = os.stat(self.fullpath).st_mtime
+            except FileNotFoundError:
+                scripts = ctx.options.scripts
+                scripts.remove(self.path)
+                ctx.options.update(scripts=scripts)
+                return
+
             if mtime > self.last_mtime:
                 ctx.log.info("Loading script: %s" % self.path)
                 if self.ns:
                     ctx.master.addons.remove(self.ns)
+                    del sys.modules[self.ns.__name__]
                 self.ns = load_script(ctx, self.fullpath)
                 if self.ns:
                     # We're already running, so we have to explicitly register and
View File

@@ -28,12 +28,18 @@ class StreamBodies:
         if expected_size and not r.raw_content and not (0 <= expected_size <= self.max_size):
             # r.stream may already be a callable, which we want to preserve.
             r.stream = r.stream or True
-            # FIXME: make message generic when we add rquest streaming
-            ctx.log.info("Streaming response from %s" % f.request.host)
+            ctx.log.info("Streaming {} {}".format("response from" if not is_request else "request to", f.request.host))

-    # FIXME! Request streaming doesn't work at the moment.
     def requestheaders(self, f):
         self.run(f, True)

     def responseheaders(self, f):
         self.run(f, False)
+
+    def websocket_start(self, f):
+        if ctx.options.stream_websockets:
+            f.stream = True
+            ctx.log.info("Streaming WebSocket messages between {client} and {server}".format(
+                client=human.format_address(f.client_conn.address),
+                server=human.format_address(f.server_conn.address))
+            )

View File

@@ -1,6 +1,63 @@
-import subprocess
+import io
+
+from kaitaistruct import KaitaiStream

 from . import base
+from mitmproxy.contrib.kaitaistruct import google_protobuf
+
+
+def write_buf(out, field_tag, body, indent_level):
+    if body is not None:
+        out.write("{: <{level}}{}: {}\n".format('', field_tag, body if isinstance(body, int) else str(body, 'utf-8'),
+                                                level=indent_level))
+    elif field_tag is not None:
+        out.write(' ' * indent_level + str(field_tag) + " {\n")
+    else:
+        out.write(' ' * indent_level + "}\n")
+
+
+def format_pbuf(raw):
+    out = io.StringIO()
+    stack = []
+
+    try:
+        buf = google_protobuf.GoogleProtobuf(KaitaiStream(io.BytesIO(raw)))
+    except:
+        return False
+    stack.extend([(pair, 0) for pair in buf.pairs[::-1]])
+
+    while len(stack):
+        pair, indent_level = stack.pop()
+
+        if pair.wire_type == pair.WireTypes.group_start:
+            body = None
+        elif pair.wire_type == pair.WireTypes.group_end:
+            body = None
+            pair._m_field_tag = None
+        elif pair.wire_type == pair.WireTypes.len_delimited:
+            body = pair.value.body
+        elif pair.wire_type == pair.WireTypes.varint:
+            body = pair.value.value
+        else:
+            body = pair.value
+
+        try:
+            next_buf = google_protobuf.GoogleProtobuf(KaitaiStream(io.BytesIO(body)))
+            stack.extend([(pair, indent_level + 2) for pair in next_buf.pairs[::-1]])
+            write_buf(out, pair.field_tag, None, indent_level)
+        except:
+            write_buf(out, pair.field_tag, body, indent_level)
+
+        if stack:
+            prev_level = stack[-1][1]
+        else:
+            prev_level = 0
+
+        if prev_level < indent_level:
+            levels = int((indent_level - prev_level) / 2)
+            for i in range(1, levels + 1):
+                write_buf(out, None, None, indent_level - i * 2)
+
+    return out.getvalue()


 class ViewProtobuf(base.View):
@@ -15,28 +72,9 @@ class ViewProtobuf(base.View):
         "application/x-protobuffer",
     ]

-    def is_available(self):
-        try:
-            p = subprocess.Popen(
-                ["protoc", "--version"],
-                stdout=subprocess.PIPE
-            )
-            out, _ = p.communicate()
-            return out.startswith(b"libprotoc")
-        except:
-            return False
-
     def __call__(self, data, **metadata):
-        if not self.is_available():
-            raise NotImplementedError("protoc not found. Please make sure 'protoc' is available in $PATH.")
-
-        # if Popen raises OSError, it will be caught in
-        # get_content_view and fall back to Raw
-        p = subprocess.Popen(['protoc', '--decode_raw'],
-                             stdin=subprocess.PIPE,
-                             stdout=subprocess.PIPE,
-                             stderr=subprocess.PIPE)
-        decoded, _ = p.communicate(input=data)
+        decoded = format_pbuf(data)
         if not decoded:
             raise ValueError("Failed to parse input.")
         return "Protobuf", base.format_text(decoded)

View File

@@ -0,0 +1,124 @@
# This is a generated file! Please edit source .ksy file and use kaitai-struct-compiler to rebuild

from pkg_resources import parse_version
from kaitaistruct import __version__ as ks_version, KaitaiStruct, KaitaiStream, BytesIO
from enum import Enum


if parse_version(ks_version) < parse_version('0.7'):
    raise Exception("Incompatible Kaitai Struct Python API: 0.7 or later is required, but you have %s" % (ks_version))

from .vlq_base128_le import VlqBase128Le


class GoogleProtobuf(KaitaiStruct):
    """Google Protocol Buffers (AKA protobuf) is a popular data
    serialization scheme used for communication protocols, data storage,
    etc. Implementations are available for almost every popular
    language. The focus points of this scheme are brevity (data is
    encoded in a very size-efficient manner) and extensibility (one can
    add keys to the structure while keeping it readable by previous
    versions of the software).

    Protobuf uses a semi-self-describing encoding scheme for its
    messages. It means that it is possible to parse the overall structure
    of the message (skipping over fields one can't understand), but to
    fully understand the message, one needs a protocol definition file
    (`.proto`). To be specific:

    * "Keys" in key-value pairs provided in the message are identified
      only with an integer "field tag". The `.proto` file provides info on
      which symbolic field names these field tags map to.
    * "Keys" also provide something called "wire type". It's not a data
      type in its common sense (i.e. you can't, for example, distinguish
      `sint32` vs `uint32` vs some enum, or `string` from `bytes`), but
      it's enough information to determine how many bytes to
      parse. Interpretation of the value should be done according to the
      type specified in the `.proto` file.
    * There's no direct information on which fields are optional /
      required, which fields may be repeated or constitute a map, what
      restrictions are placed on fields' usage in a single message, what
      the fields' default values are, etc.

    .. seealso::
       Source - https://developers.google.com/protocol-buffers/docs/encoding
    """
    def __init__(self, _io, _parent=None, _root=None):
        self._io = _io
        self._parent = _parent
        self._root = _root if _root else self
        self._read()

    def _read(self):
        self.pairs = []
        while not self._io.is_eof():
            self.pairs.append(self._root.Pair(self._io, self, self._root))

    class Pair(KaitaiStruct):
        """Key-value pair."""

        class WireTypes(Enum):
            varint = 0
            bit_64 = 1
            len_delimited = 2
            group_start = 3
            group_end = 4
            bit_32 = 5

        def __init__(self, _io, _parent=None, _root=None):
            self._io = _io
            self._parent = _parent
            self._root = _root if _root else self
            self._read()

        def _read(self):
            self.key = VlqBase128Le(self._io)
            _on = self.wire_type
            if _on == self._root.Pair.WireTypes.varint:
                self.value = VlqBase128Le(self._io)
            elif _on == self._root.Pair.WireTypes.len_delimited:
                self.value = self._root.DelimitedBytes(self._io, self, self._root)
            elif _on == self._root.Pair.WireTypes.bit_64:
                self.value = self._io.read_u8le()
            elif _on == self._root.Pair.WireTypes.bit_32:
                self.value = self._io.read_u4le()

        @property
        def wire_type(self):
            """"Wire type" is a part of the "key" that carries enough
            information to parse the value from the wire, i.e. read the
            correct number of bytes, but there's not enough information to
            interpret it unambiguously. For example, one can't clearly
            distinguish 64-bit fixed-size integers from 64-bit floats,
            signed zigzag-encoded varints from regular unsigned varints,
            arbitrary bytes from UTF-8 encoded strings, etc.
            """
            if hasattr(self, '_m_wire_type'):
                return self._m_wire_type if hasattr(self, '_m_wire_type') else None

            self._m_wire_type = self._root.Pair.WireTypes((self.key.value & 7))
            return self._m_wire_type if hasattr(self, '_m_wire_type') else None

        @property
        def field_tag(self):
            """Identifies a field of the protocol. One can look up the symbolic
            field name in a `.proto` file by this field tag.
            """
            if hasattr(self, '_m_field_tag'):
                return self._m_field_tag if hasattr(self, '_m_field_tag') else None

            self._m_field_tag = (self.key.value >> 3)
            return self._m_field_tag if hasattr(self, '_m_field_tag') else None

    class DelimitedBytes(KaitaiStruct):
        def __init__(self, _io, _parent=None, _root=None):
            self._io = _io
            self._parent = _parent
            self._root = _root if _root else self
            self._read()

        def _read(self):
            self.len = VlqBase128Le(self._io)
            self.body = self._io.read_bytes(self.len.value)
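
A worked example of the key layout described in the docstrings above, i.e. field_tag = key >> 3 and wire_type = key & 7 (a sketch; assumes the generated module is importable):

    import io
    from kaitaistruct import KaitaiStream
    from mitmproxy.contrib.kaitaistruct.google_protobuf import GoogleProtobuf

    # 0x08 -> field_tag = 8 >> 3 = 1, wire_type = 8 & 7 = 0 (varint);
    # 0x96 0x01 is the varint encoding of 150.
    pair = GoogleProtobuf(KaitaiStream(io.BytesIO(b"\x08\x96\x01"))).pairs[0]
    assert pair.field_tag == 1
    assert pair.wire_type == pair.WireTypes.varint
    assert pair.value.value == 150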

View File

@@ -7,5 +7,7 @@ wget -N https://raw.githubusercontent.com/kaitai-io/kaitai_struct_formats/master
 wget -N https://raw.githubusercontent.com/kaitai-io/kaitai_struct_formats/master/image/jpeg.ksy
 wget -N https://raw.githubusercontent.com/kaitai-io/kaitai_struct_formats/master/image/png.ksy
 wget -N https://raw.githubusercontent.com/kaitai-io/kaitai_struct_formats/master/image/ico.ksy
+wget -N https://raw.githubusercontent.com/kaitai-io/kaitai_struct_formats/master/common/vlq_base128_le.ksy
+wget -N https://raw.githubusercontent.com/kaitai-io/kaitai_struct_formats/master/serialization/google_protobuf.ksy

 kaitai-struct-compiler --target python --opaque-types=true *.ksy

View File

@@ -0,0 +1,94 @@
# This is a generated file! Please edit source .ksy file and use kaitai-struct-compiler to rebuild

from pkg_resources import parse_version
from kaitaistruct import __version__ as ks_version, KaitaiStruct, KaitaiStream, BytesIO


if parse_version(ks_version) < parse_version('0.7'):
    raise Exception("Incompatible Kaitai Struct Python API: 0.7 or later is required, but you have %s" % (ks_version))


class VlqBase128Le(KaitaiStruct):
    """A variable-length unsigned integer using base128 encoding. 1-byte groups
    consist of a 1-bit continuation flag and a 7-bit value, and are ordered
    "least significant group first", i.e. in "little-endian" manner.

    This particular encoding is specified and used in:

    * DWARF debug file format, where it's dubbed "unsigned LEB128" or "ULEB128".
      http://dwarfstd.org/doc/dwarf-2.0.0.pdf - page 139
    * Google Protocol Buffers, where it's called "Base 128 Varints".
      https://developers.google.com/protocol-buffers/docs/encoding?csw=1#varints
    * Apache Lucene, where it's called "VInt"
      http://lucene.apache.org/core/3_5_0/fileformats.html#VInt
    * Apache Avro uses this as a basis for integer encoding, adding ZigZag on
      top of it for signed ints
      http://avro.apache.org/docs/current/spec.html#binary_encode_primitive

    More information on this encoding is available at https://en.wikipedia.org/wiki/LEB128

    This particular implementation supports serialized values up to 8 bytes long.
    """
    def __init__(self, _io, _parent=None, _root=None):
        self._io = _io
        self._parent = _parent
        self._root = _root if _root else self
        self._read()

    def _read(self):
        self.groups = []
        while True:
            _ = self._root.Group(self._io, self, self._root)
            self.groups.append(_)
            if not (_.has_next):
                break

    class Group(KaitaiStruct):
        """One byte group, clearly divided into 7-bit "value" and 1-bit "has continuation
        in the next byte" flag.
        """
        def __init__(self, _io, _parent=None, _root=None):
            self._io = _io
            self._parent = _parent
            self._root = _root if _root else self
            self._read()

        def _read(self):
            self.b = self._io.read_u1()

        @property
        def has_next(self):
            """If true, then we have more bytes to read."""
            if hasattr(self, '_m_has_next'):
                return self._m_has_next if hasattr(self, '_m_has_next') else None

            self._m_has_next = (self.b & 128) != 0
            return self._m_has_next if hasattr(self, '_m_has_next') else None

        @property
        def value(self):
            """The 7-bit (base128) numeric value of this group."""
            if hasattr(self, '_m_value'):
                return self._m_value if hasattr(self, '_m_value') else None

            self._m_value = (self.b & 127)
            return self._m_value if hasattr(self, '_m_value') else None

    @property
    def len(self):
        if hasattr(self, '_m_len'):
            return self._m_len if hasattr(self, '_m_len') else None

        self._m_len = len(self.groups)
        return self._m_len if hasattr(self, '_m_len') else None

    @property
    def value(self):
        """Resulting value as normal integer."""
        if hasattr(self, '_m_value'):
            return self._m_value if hasattr(self, '_m_value') else None

        self._m_value = (((((((self.groups[0].value + ((self.groups[1].value << 7) if self.len >= 2 else 0)) + ((self.groups[2].value << 14) if self.len >= 3 else 0)) + ((self.groups[3].value << 21) if self.len >= 4 else 0)) + ((self.groups[4].value << 28) if self.len >= 5 else 0)) + ((self.groups[5].value << 35) if self.len >= 6 else 0)) + ((self.groups[6].value << 42) if self.len >= 7 else 0)) + ((self.groups[7].value << 49) if self.len >= 8 else 0))
        return self._m_value if hasattr(self, '_m_value') else None
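
A worked decoding of the scheme implemented above (a sketch; assumes the generated module is importable): the classic ULEB128 test vector 0xE5 0x8E 0x26 yields 0x65 + (0x0E << 7) + (0x26 << 14) = 101 + 1792 + 622592 = 624485.

    import io
    from kaitaistruct import KaitaiStream
    from mitmproxy.contrib.kaitaistruct.vlq_base128_le import VlqBase128Le

    n = VlqBase128Le(KaitaiStream(io.BytesIO(b"\xe5\x8e\x26")))
    assert n.len == 3          # three 1-byte groups
    assert n.value == 624485   # little-endian base-128 combination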

View File

@@ -154,6 +154,13 @@ class Options(optmanager.OptManager):
             Understands k/m/g suffixes, i.e. 3m for 3 megabytes.
             """
         )
+        self.add_option(
+            "stream_websockets", bool, False,
+            """
+            Stream WebSocket messages between client and server.
+            Messages are captured and cannot be modified.
+            """
+        )
         self.add_option(
             "verbosity", int, 2,
             "Log verbosity."

View File

@@ -273,7 +273,10 @@ class HttpLayer(base.Layer):
                     self.send_response(http.expect_continue_response)
                     request.headers.pop("expect")

-                request.data.content = b"".join(self.read_request_body(request))
+                if f.request.stream:
+                    f.request.data.content = None
+                else:
+                    f.request.data.content = b"".join(self.read_request_body(request))
                 request.timestamp_end = time.time()
             except exceptions.HttpException as e:
                 # We optimistically guess there might be an HTTP client on the
@@ -326,12 +329,8 @@ class HttpLayer(base.Layer):
                 f.request.scheme
             )

-        def get_response():
-            self.send_request(f.request)
-            f.response = self.read_response_headers()
-
         try:
-            get_response()
+            self.send_request_headers(f.request)
         except exceptions.NetlibException as e:
             self.log(
                 "server communication error: %s" % repr(e),
@@ -357,7 +356,19 @@ class HttpLayer(base.Layer):
             self.disconnect()
             self.connect()
-            get_response()
+            self.send_request_headers(f.request)
+
+        # This is taken out of the try/except block because when streaming
+        # we can't send the request body while retrying, as the generator gets exhausted.
+        if f.request.stream:
+            chunks = self.read_request_body(f.request)
+            if callable(f.request.stream):
+                chunks = f.request.stream(chunks)
+            self.send_request_body(f.request, chunks)
+        else:
+            self.send_request_body(f.request, [f.request.data.content])
+
+        f.response = self.read_response_headers()

         # call the appropriate script hook - this is an opportunity for
         # an inline script to set f.stream = True

View File

@@ -22,6 +22,16 @@ class Http1Layer(httpbase._HttpTransmissionLayer):
             self.config.options._processed.get("body_size_limit")
         )

+    def send_request_headers(self, request):
+        headers = http1.assemble_request_head(request)
+        self.server_conn.wfile.write(headers)
+        self.server_conn.wfile.flush()
+
+    def send_request_body(self, request, chunks):
+        for chunk in http1.assemble_body(request.headers, chunks):
+            self.server_conn.wfile.write(chunk)
+            self.server_conn.wfile.flush()
+
     def send_request(self, request):
         self.server_conn.wfile.write(http1.assemble_request(request))
         self.server_conn.wfile.flush()

View File

@@ -487,14 +487,23 @@ class Http2SingleStreamLayer(httpbase._HttpTransmissionLayer, basethread.BaseThread):
     @detect_zombie_stream
     def read_request_body(self, request):
-        self.request_data_finished.wait()
-        data = []
-        while self.request_data_queue.qsize() > 0:
-            data.append(self.request_data_queue.get())
-        return data
+        if not request.stream:
+            self.request_data_finished.wait()
+
+        while True:
+            try:
+                yield self.request_data_queue.get(timeout=0.1)
+            except queue.Empty:  # pragma: no cover
+                pass
+            if self.request_data_finished.is_set():
+                self.raise_zombie()
+                while self.request_data_queue.qsize() > 0:
+                    yield self.request_data_queue.get()
+                break
+            self.raise_zombie()

     @detect_zombie_stream
-    def send_request(self, message):
+    def send_request_headers(self, request):
         if self.pushed:
             # nothing to do here
             return
@@ -519,10 +528,10 @@ class Http2SingleStreamLayer(httpbase._HttpTransmissionLayer, basethread.BaseThread):
             self.server_stream_id = self.connections[self.server_conn].get_next_available_stream_id()
             self.server_to_client_stream_ids[self.server_stream_id] = self.client_stream_id

-        headers = message.headers.copy()
-        headers.insert(0, ":path", message.path)
-        headers.insert(0, ":method", message.method)
-        headers.insert(0, ":scheme", message.scheme)
+        headers = request.headers.copy()
+        headers.insert(0, ":path", request.path)
+        headers.insert(0, ":method", request.method)
+        headers.insert(0, ":scheme", request.scheme)

         priority_exclusive = None
         priority_depends_on = None
@@ -553,13 +562,24 @@ class Http2SingleStreamLayer(httpbase._HttpTransmissionLayer, basethread.BaseThread):
             self.raise_zombie()
             self.connections[self.server_conn].lock.release()

+    @detect_zombie_stream
+    def send_request_body(self, request, chunks):
+        if self.pushed:
+            # nothing to do here
+            return
+
         if not self.no_body:
             self.connections[self.server_conn].safe_send_body(
                 self.raise_zombie,
                 self.server_stream_id,
-                [message.content]
+                chunks
             )

+    @detect_zombie_stream
+    def send_request(self, message):
+        self.send_request_headers(message)
+        self.send_request_body(message, [message.content])
+
     @detect_zombie_stream
     def read_response_headers(self):
         self.response_arrived.wait()

View File

@@ -55,6 +55,7 @@ class WebSocketLayer(base.Layer):
             return self._handle_unknown_frame(frame, source_conn, other_conn, is_server)

     def _handle_data_frame(self, frame, source_conn, other_conn, is_server):
+
         fb = self.server_frame_buffer if is_server else self.client_frame_buffer
         fb.append(frame)
@@ -70,43 +71,51 @@ class WebSocketLayer(base.Layer):
             self.flow.messages.append(websocket_message)
             self.channel.ask("websocket_message", self.flow)

-            def get_chunk(payload):
-                if len(payload) == length:
-                    # message has the same length, we can reuse the same sizes
-                    pos = 0
-                    for s in original_chunk_sizes:
-                        yield payload[pos:pos + s]
-                        pos += s
-                else:
-                    # just re-chunk everything into 10kB frames
-                    chunk_size = 10240
-                    chunks = range(0, len(payload), chunk_size)
-                    for i in chunks:
-                        yield payload[i:i + chunk_size]
-
-            frms = [
-                websockets.Frame(
-                    payload=chunk,
-                    opcode=frame.header.opcode,
-                    mask=(False if is_server else 1),
-                    masking_key=(b'' if is_server else os.urandom(4)))
-                for chunk in get_chunk(websocket_message.content)
-            ]
-
-            if len(frms) > 0:
-                frms[-1].header.fin = True
-            else:
-                frms.append(websockets.Frame(
-                    fin=True,
-                    opcode=websockets.OPCODE.CONTINUE,
-                    mask=(False if is_server else 1),
-                    masking_key=(b'' if is_server else os.urandom(4))))
-
-            frms[0].header.opcode = message_type
-            frms[0].header.rsv1 = compressed_message
-
-            for frm in frms:
-                other_conn.send(bytes(frm))
+            if not self.flow.stream:
+                def get_chunk(payload):
+                    if len(payload) == length:
+                        # message has the same length, we can reuse the same sizes
+                        pos = 0
+                        for s in original_chunk_sizes:
+                            yield payload[pos:pos + s]
+                            pos += s
+                    else:
+                        # just re-chunk everything into 4kB frames
+                        # header len = 4 bytes without masking key and 8 bytes with masking key
+                        chunk_size = 4092 if is_server else 4088
+                        chunks = range(0, len(payload), chunk_size)
+                        for i in chunks:
+                            yield payload[i:i + chunk_size]
+
+                frms = [
+                    websockets.Frame(
+                        payload=chunk,
+                        opcode=frame.header.opcode,
+                        mask=(False if is_server else 1),
+                        masking_key=(b'' if is_server else os.urandom(4)))
+                    for chunk in get_chunk(websocket_message.content)
+                ]
+
+                if len(frms) > 0:
+                    frms[-1].header.fin = True
+                else:
+                    frms.append(websockets.Frame(
+                        fin=True,
+                        opcode=websockets.OPCODE.CONTINUE,
+                        mask=(False if is_server else 1),
+                        masking_key=(b'' if is_server else os.urandom(4))))
+
+                frms[0].header.opcode = message_type
+                frms[0].header.rsv1 = compressed_message
+
+                for frm in frms:
+                    other_conn.send(bytes(frm))
+            else:
+                other_conn.send(bytes(frame))
+
+        elif self.flow.stream:
+            other_conn.send(bytes(frame))
         return True

View File

@@ -45,6 +45,7 @@ class WebSocketFlow(flow.Flow):
         self.close_code = '(status code missing)'
         self.close_message = '(message missing)'
         self.close_reason = 'unknown status code'
+        self.stream = False

         if handshake_flow:
             self.client_key = websockets.get_client_key(handshake_flow.request.headers)

View File

@@ -74,7 +74,7 @@ setup(
         "ldap3>=2.2.0, <2.3",
         "passlib>=1.6.5, <1.8",
         "pyasn1>=0.1.9, <0.3",
-        "pyOpenSSL>=16.0, <17.1",
+        "pyOpenSSL>=16.0,<17.2",
         "pyparsing>=2.1.3, <2.3",
         "pyperclip>=1.5.22, <1.6",
         "requests>=2.9.1, <3",
@@ -98,14 +98,14 @@ setup(
             "pytest>=3.1, <4",
             "rstcheck>=2.2, <4.0",
             "sphinx_rtd_theme>=0.1.9, <0.3",
-            "sphinx-autobuild>=0.5.2, <0.7",
+            "sphinx-autobuild>=0.5.2, <0.8",
             "sphinx>=1.3.5, <1.7",
             "sphinxcontrib-documentedlist>=0.5.0, <0.7",
             "tox>=2.3, <3",
         ],
         'examples': [
             "beautifulsoup4>=4.4.1, <4.7",
-            "Pillow>=3.2, <4.2",
+            "Pillow>=3.2,<4.3",
         ]
     }
 )

View File

@@ -1,6 +1,7 @@
 import traceback
 import sys
 import time
+import os

 import pytest
 from unittest import mock
@@ -183,6 +184,20 @@ class TestScriptLoader:
             scripts = ["one", "one"]
         )

+    def test_script_deletion(self):
+        tdir = tutils.test_data.path("mitmproxy/data/addonscripts/")
+        with open(tdir + "/dummy.py", 'w') as f:
+            f.write("\n")
+
+        with taddons.context() as tctx:
+            sl = script.ScriptLoader()
+            tctx.master.addons.add(sl)
+            tctx.configure(sl, scripts=[tutils.test_data.path("mitmproxy/data/addonscripts/dummy.py")])
+            os.remove(tutils.test_data.path("mitmproxy/data/addonscripts/dummy.py"))
+
+            tctx.invoke(sl, "tick")
+            assert not tctx.options.scripts
+            assert not sl.addons
+
     def test_order(self):
         rec = tutils.test_data.path("mitmproxy/data/addonscripts/recorder")
         sc = script.ScriptLoader()

View File

@@ -29,3 +29,9 @@ def test_simple():
         f = tflow.tflow(resp=True)
         f.response.headers["content-length"] = "invalid"
         tctx.cycle(sa, f)
+
+        tctx.configure(sa, stream_websockets=True)
+        f = tflow.twebsocketflow()
+        assert not f.stream
+        sa.websocket_start(f)
+        assert f.stream

View File

@@ -1,52 +1,31 @@
-from unittest import mock
 import pytest

 from mitmproxy.contentviews import protobuf
 from mitmproxy.test import tutils
 from . import full_eval

+data = tutils.test_data.push("mitmproxy/contentviews/test_protobuf_data/")
+

 def test_view_protobuf_request():
     v = full_eval(protobuf.ViewProtobuf())
-    p = tutils.test_data.path("mitmproxy/data/protobuf01")
+    p = data.path("protobuf01")

-    with mock.patch('mitmproxy.contentviews.protobuf.ViewProtobuf.is_available'):
-        with mock.patch('subprocess.Popen') as n:
-            m = mock.Mock()
-            attrs = {'communicate.return_value': (b'1: "3bbc333c-e61c-433b-819a-0b9a8cc103b8"', True)}
-            m.configure_mock(**attrs)
-            n.return_value = m
-
-            with open(p, "rb") as f:
-                data = f.read()
-            content_type, output = v(data)
-            assert content_type == "Protobuf"
-            assert output[0] == [('text', b'1: "3bbc333c-e61c-433b-819a-0b9a8cc103b8"')]
-
-            m.communicate = mock.MagicMock()
-            m.communicate.return_value = (None, None)
-            with pytest.raises(ValueError, matches="Failed to parse input."):
-                v(b'foobar')
-
-
-def test_view_protobuf_availability():
-    with mock.patch('subprocess.Popen') as n:
-        m = mock.Mock()
-        attrs = {'communicate.return_value': (b'libprotoc fake version', True)}
-        m.configure_mock(**attrs)
-        n.return_value = m
-        assert protobuf.ViewProtobuf().is_available()
-
-        m = mock.Mock()
-        attrs = {'communicate.return_value': (b'command not found', True)}
-        m.configure_mock(**attrs)
-        n.return_value = m
-        assert not protobuf.ViewProtobuf().is_available()
-
-
-def test_view_protobuf_fallback():
-    with mock.patch('subprocess.Popen.communicate') as m:
-        m.side_effect = OSError()
-        v = full_eval(protobuf.ViewProtobuf())
-        with pytest.raises(NotImplementedError, matches='protoc not found'):
-            v(b'foobar')
+    with open(p, "rb") as f:
+        raw = f.read()
+    content_type, output = v(raw)
+    assert content_type == "Protobuf"
+    assert output == [[('text', '1: 3bbc333c-e61c-433b-819a-0b9a8cc103b8')]]
+    with pytest.raises(ValueError, matches="Failed to parse input."):
+        v(b'foobar')
+
+
+@pytest.mark.parametrize("filename", ["protobuf02", "protobuf03"])
+def test_format_pbuf(filename):
+    path = data.path(filename)
+    with open(path, "rb") as f:
+        input = f.read()
+    with open(path + "-decoded") as f:
+        expected = f.read()
+
+    assert protobuf.format_pbuf(input) == expected

View File

@@ -0,0 +1,65 @@
1 {
  1: tpbuf
  4 {
    1: Person
    2 {
      1: name
      3: 1
      4: 2
      5: 9
    }
    2 {
      1: id
      3: 2
      4: 2
      5: 5
    }
    2 {
      1 {
        12: 1818845549
      }
      3: 3
      4: 1
      5: 9
    }
    2 {
      1: phone
      3: 4
      4: 3
      5: 11
      6: .Person.PhoneNumber
    }
    3 {
      1: PhoneNumber
      2 {
        1: number
        3: 1
        4: 2
        5: 9
      }
      2 {
        1: type
        3: 2
        4: 1
        5: 14
        6: .Person.PhoneType
        7: HOME
      }
    }
    4 {
      1: PhoneType
      2 {
        1: MOBILE
        2: 0
      }
      2 {
        1: HOME
        2: 1
      }
      2 {
        1: WORK
        2: 2
      }
    }
  }
}

View File

@@ -0,0 +1 @@
<18> <20>

View File

@@ -0,0 +1,4 @@
2 {
  3: 3840
  4: 2160
}

View File

@@ -1,7 +1,6 @@
 from unittest import mock

 import pytest

-from mitmproxy import exceptions
 from mitmproxy.test import tflow
 from mitmproxy.net.http import http1
 from mitmproxy.net.tcp import TCPClient
@@ -108,9 +107,5 @@ class TestStreaming(tservers.HTTPProxyTest):
             r = p.request("post:'%s/p/200:b@10000'" % self.server.urlbase)
             assert len(r.content) == 10000

-            if streaming:
-                with pytest.raises(exceptions.HttpReadDisconnect):  # as the assertion in assert_write fails
-                    # request with 10000 bytes
-                    p.request("post:'%s/p/200':b@10000" % self.server.urlbase)
-            else:
-                assert p.request("post:'%s/p/200':b@10000" % self.server.urlbase)
+            # request with 10000 bytes
+            assert p.request("post:'%s/p/200':b@10000" % self.server.urlbase)

View File

@@ -14,6 +14,7 @@ import mitmproxy.net
 from ...net import tservers as net_tservers
 from mitmproxy import exceptions
 from mitmproxy.net.http import http1, http2
+from pathod.language import generators

 from ... import tservers
 from ....conftest import requires_alpn
@@ -166,7 +167,8 @@ class _Http2TestBase:
             end_stream=None,
             priority_exclusive=None,
             priority_depends_on=None,
-            priority_weight=None):
+            priority_weight=None,
+            streaming=False):
         if headers is None:
             headers = []
         if end_stream is None:
@@ -182,7 +184,8 @@ class _Http2TestBase:
         )
         if body:
             h2_conn.send_data(stream_id, body)
-            h2_conn.end_stream(stream_id)
+            if not streaming:
+                h2_conn.end_stream(stream_id)
         wfile.write(h2_conn.data_to_send())
         wfile.flush()
@@ -862,3 +865,120 @@ class TestConnectionTerminated(_Http2Test):
         assert connection_terminated_event.error_code == 5
         assert connection_terminated_event.last_stream_id == 42
         assert connection_terminated_event.additional_data == b'foobar'
+
+
+@requires_alpn
+class TestRequestStreaming(_Http2Test):
+    @classmethod
+    def handle_server_event(cls, event, h2_conn, rfile, wfile):
+        if isinstance(event, h2.events.ConnectionTerminated):
+            return False
+        elif isinstance(event, h2.events.DataReceived):
+            data = event.data
+            assert data
+            h2_conn.close_connection(error_code=5, last_stream_id=42, additional_data=data)
+            wfile.write(h2_conn.data_to_send())
+            wfile.flush()
+        return True
+
+    @pytest.mark.parametrize('streaming', [True, False])
+    def test_request_streaming(self, streaming):
+        class Stream:
+            def requestheaders(self, f):
+                f.request.stream = streaming
+
+        self.master.addons.add(Stream())
+        h2_conn = self.setup_connection()
+        body = generators.RandomGenerator("bytes", 100)[:]
+        self._send_request(
+            self.client.wfile,
+            h2_conn,
+            headers=[
+                (':authority', "127.0.0.1:{}".format(self.server.server.address[1])),
+                (':method', 'GET'),
+                (':scheme', 'https'),
+                (':path', '/'),
+            ],
+            body=body,
+            streaming=True
+        )
+        done = False
+        connection_terminated_event = None
+        self.client.rfile.o.settimeout(2)
+        while not done:
+            try:
+                raw = b''.join(http2.read_raw_frame(self.client.rfile))
+                events = h2_conn.receive_data(raw)
+                for event in events:
+                    if isinstance(event, h2.events.ConnectionTerminated):
+                        connection_terminated_event = event
+                        done = True
+            except:
+                break
+
+        if streaming:
+            assert connection_terminated_event.additional_data == body
+        else:
+            assert connection_terminated_event is None
+
+
+@requires_alpn
+class TestResponseStreaming(_Http2Test):
+    @classmethod
+    def handle_server_event(cls, event, h2_conn, rfile, wfile):
+        if isinstance(event, h2.events.ConnectionTerminated):
+            return False
+        elif isinstance(event, h2.events.RequestReceived):
+            data = generators.RandomGenerator("bytes", 100)[:]
+            h2_conn.send_headers(event.stream_id, [
+                (':status', '200'),
+                ('content-length', '100')
+            ])
+            h2_conn.send_data(event.stream_id, data)
+            wfile.write(h2_conn.data_to_send())
+            wfile.flush()
+        return True
+
+    @pytest.mark.parametrize('streaming', [True, False])
+    def test_response_streaming(self, streaming):
+        class Stream:
+            def responseheaders(self, f):
+                f.response.stream = streaming
+
+        self.master.addons.add(Stream())
+        h2_conn = self.setup_connection()
+        self._send_request(
+            self.client.wfile,
+            h2_conn,
+            headers=[
+                (':authority', "127.0.0.1:{}".format(self.server.server.address[1])),
+                (':method', 'GET'),
+                (':scheme', 'https'),
+                (':path', '/'),
+            ]
+        )
+        done = False
+        self.client.rfile.o.settimeout(2)
+        data = None
+        while not done:
+            try:
+                raw = b''.join(http2.read_raw_frame(self.client.rfile))
+                events = h2_conn.receive_data(raw)
+                for event in events:
+                    if isinstance(event, h2.events.DataReceived):
+                        data = event.data
+                        done = True
+            except:
+                break
+
+        if streaming:
+            assert data
+        else:
+            assert data is None

View File

@@ -155,7 +155,13 @@ class TestSimple(_WebSocketTest):
             wfile.write(bytes(frame))
             wfile.flush()

-    def test_simple(self):
+    @pytest.mark.parametrize('streaming', [True, False])
+    def test_simple(self, streaming):
+        class Stream:
+            def websocket_start(self, f):
+                f.stream = streaming
+
+        self.master.addons.add(Stream())
         self.setup_connection()

         frame = websockets.Frame.from_file(self.client.rfile)
@@ -328,3 +334,32 @@ class TestInvalidFrame(_WebSocketTest):
         frame = websockets.Frame.from_file(self.client.rfile)
         assert frame.header.opcode == 15
         assert frame.payload == b'foobar'
+
+
+class TestStreaming(_WebSocketTest):
+    @classmethod
+    def handle_websockets(cls, rfile, wfile):
+        wfile.write(bytes(websockets.Frame(opcode=websockets.OPCODE.TEXT, payload=b'server-foobar')))
+        wfile.flush()
+
+    @pytest.mark.parametrize('streaming', [True, False])
+    def test_streaming(self, streaming):
+        class Stream:
+            def websocket_start(self, f):
+                f.stream = streaming
+
+        self.master.addons.add(Stream())
+        self.setup_connection()
+        frame = None
+        if not streaming:
+            with pytest.raises(exceptions.TcpDisconnect):  # Reader.safe_read gets nothing as result
+                frame = websockets.Frame.from_file(self.client.rfile)
+            assert frame is None
+        else:
+            frame = websockets.Frame.from_file(self.client.rfile)
+
+            assert frame
+            assert self.master.state.flows[1].messages == []  # Message not appended as the final frame isn't received

View File

@@ -239,13 +239,28 @@ class TestHTTP(tservers.HTTPProxyTest, CommonMixin):
                 p.request("get:'%s'" % response)

     def test_reconnect(self):
-        req = "get:'%s/p/200:b@1:da'" % self.server.urlbase
+        req = "get:'%s/p/200:b@1'" % self.server.urlbase
         p = self.pathoc()

+        class MockOnce:
+            call = 0
+
+            def mock_once(self, http1obj, req):
+                self.call += 1
+                if self.call == 1:
+                    raise exceptions.TcpDisconnect
+                else:
+                    headers = http1.assemble_request_head(req)
+                    http1obj.server_conn.wfile.write(headers)
+                    http1obj.server_conn.wfile.flush()
+
         with p.connect():
-            assert p.request(req)
-            # Server has disconnected. Mitmproxy should detect this, and reconnect.
-            assert p.request(req)
-            assert p.request(req)
+            with mock.patch("mitmproxy.proxy.protocol.http1.Http1Layer.send_request_headers",
+                            side_effect=MockOnce().mock_once, autospec=True):
+                # Server disconnects while sending headers but mitmproxy reconnects
+                resp = p.request(req)
+                assert resp
+                assert resp.status_code == 200

     def test_get_connection_switching(self):
         req = "get:'%s/p/200:b@1'"
@@ -1072,6 +1087,23 @@ class TestProxyChainingSSLReconnect(tservers.HTTPUpstreamProxyTest):
         proxified to an upstream http proxy, we need to send the CONNECT
         request again.
         """
+        class MockOnce:
+            call = 0
+
+            def mock_once(self, http1obj, req):
+                self.call += 1
+                if self.call == 2:
+                    headers = http1.assemble_request_head(req)
+                    http1obj.server_conn.wfile.write(headers)
+                    http1obj.server_conn.wfile.flush()
+                    raise exceptions.TcpDisconnect
+                else:
+                    headers = http1.assemble_request_head(req)
+                    http1obj.server_conn.wfile.write(headers)
+                    http1obj.server_conn.wfile.flush()
+
         self.chain[0].tmaster.addons.add(RequestKiller([1, 2]))
         self.chain[1].tmaster.addons.add(RequestKiller([1]))
@@ -1086,7 +1118,10 @@ class TestProxyChainingSSLReconnect(tservers.HTTPUpstreamProxyTest):
             assert len(self.chain[0].tmaster.state.flows) == 1
             assert len(self.chain[1].tmaster.state.flows) == 1

-            req = p.request("get:'/p/418:b\"content2\"'")
+            with mock.patch("mitmproxy.proxy.protocol.http1.Http1Layer.send_request_headers",
+                            side_effect=MockOnce().mock_once, autospec=True):
+                req = p.request("get:'/p/418:b\"content2\"'")

             assert req.status_code == 502
             assert len(self.proxy.tmaster.state.flows) == 2

View File

@@ -0,0 +1,138 @@
import React, { Component } from "react"
import PropTypes from "prop-types"
import { connect } from "react-redux"
import { update as updateOptions } from "../../ducks/options"
import { Key } from "../../utils"

const stopPropagation = e => {
    if (e.keyCode !== Key.ESC) {
        e.stopPropagation()
    }
}

BooleanOption.propTypes = {
    value: PropTypes.bool.isRequired,
    onChange: PropTypes.func.isRequired,
}
function BooleanOption({ value, onChange, ...props }) {
    return (
        <div className="checkbox">
            <label>
                <input type="checkbox"
                       checked={value}
                       onChange={e => onChange(e.target.checked)}
                       {...props}
                />
                Enable
            </label>
        </div>
    )
}

StringOption.propTypes = {
    value: PropTypes.string.isRequired,
    onChange: PropTypes.func.isRequired,
}
function StringOption({ value, onChange, ...props }) {
    return (
        <input type="text"
               value={value || ""}
               onChange={e => onChange(e.target.value)}
               {...props}
        />
    )
}

function Optional(Component) {
    return function ({ onChange, ...props }) {
        return <Component
            onChange={x => onChange(x ? x : null)}
            {...props}
        />
    }
}

NumberOption.propTypes = {
    value: PropTypes.number.isRequired,
    onChange: PropTypes.func.isRequired,
}
function NumberOption({ value, onChange, ...props }) {
    return (
        <input type="number"
               value={value}
               onChange={(e) => onChange(parseInt(e.target.value))}
               {...props}
        />
    )
}

ChoicesOption.propTypes = {
    value: PropTypes.string.isRequired,
    onChange: PropTypes.func.isRequired,
}
function ChoicesOption({ value, onChange, choices, ...props }) {
    return (
        <select
            onChange={(e) => onChange(e.target.value)}
            selected={value}
            {...props}
        >
            { choices.map(
                choice => (
                    <option key={choice} value={choice}>{choice}</option>
                )
            )}
        </select>
    )
}

StringSequenceOption.propTypes = {
    value: PropTypes.string.isRequired,
    onChange: PropTypes.func.isRequired,
}
function StringSequenceOption({ value, onChange, ...props }) {
    const height = Math.max(value.length, 1)
    return <textarea
        rows={height}
        value={value.join("\n")}
        onChange={e => onChange(e.target.value.split("\n").filter(x => x.trim()))}
        {...props}
    />
}

const Options = {
    "bool": BooleanOption,
    "str": StringOption,
    "int": NumberOption,
    "optional str": Optional(StringOption),
    "sequence of str": StringSequenceOption,
}

function PureOption({ choices, type, value, onChange, name }) {
    let Opt, props = {}
    if (choices) {
        Opt = ChoicesOption;
        props.choices = choices
    } else {
        Opt = Options[type]
    }
    if (Opt !== BooleanOption) {
        props.className = "form-control"
    }
    return <Opt
        name={name}
        value={value}
        onChange={onChange}
        onKeyDown={stopPropagation}
        {...props}
    />
}

export default connect(
    (state, { name }) => ({
        ...state.options[name],
        ...state.ui.optionsEditor[name]
    }),
    (dispatch, { name }) => ({
        onChange: value => dispatch(updateOptions(name, value))
    })
)(PureOption)

View File

@@ -1,7 +1,22 @@
-import React, { Component } from 'react'
-import { connect } from 'react-redux'
-import * as modalAction from '../../ducks/ui/modal'
-import Option from './OptionMaster'
+import React, { Component } from "react"
+import { connect } from "react-redux"
+import * as modalAction from "../../ducks/ui/modal"
+import Option from "./Option"
+
+function PureOptionHelp({help}){
+    return <div className="help-block small">{help}</div>;
+}
+
+const OptionHelp = connect((state, {name}) => ({
+    help: state.options[name].help,
+}))(PureOptionHelp);
+
+function PureOptionError({error}){
+    if(!error) return null;
+    return <div className="small text-danger">{error}</div>;
+}
+
+const OptionError = connect((state, {name}) => ({
+    error: state.ui.optionsEditor[name] && state.ui.optionsEditor[name].error
+}))(PureOptionError);

 class PureOptionModal extends Component {
@@ -27,19 +42,21 @@ class PureOptionModal extends Component {
                     </div>

                     <div className="modal-body">
-                        <div className="container-fluid">
+                        <div className="form-horizontal">
                             {
-                                Object.keys(options).sort()
-                                    .map((key, index) => {
-                                        let option = options[key];
-                                        return (
-                                            <Option
-                                                key={index}
-                                                name={key}
-                                                option={option}
-                                            />)
-                                    })
+                                options.map(name =>
+                                    <div key={name} className="form-group">
+                                        <div className="col-xs-6">
+                                            <label htmlFor={name}>{name}</label>
+                                            <OptionHelp name={name}/>
+                                        </div>
+                                        <div className="col-xs-6">
+                                            <Option name={name}/>
+                                            <OptionError name={name}/>
+                                        </div>
+                                    </div>
+                                )
                             }
                         </div>
                     </div>
@@ -52,7 +69,7 @@ class PureOptionModal extends Component {
 export default connect(
     state => ({
-        options: state.options
+        options: Object.keys(state.options)
     }),
     {
         hideModal: modalAction.hideModal,

View File

@@ -1,6 +1,6 @@
 export const ConnectionState = {
     INIT: Symbol("init"),
-    FETCHING: Symbol("fetching"), // WebSocket is established, but still startFetching resources.
+    FETCHING: Symbol("fetching"), // WebSocket is established, but still fetching resources.
     ESTABLISHED: Symbol("established"),
     ERROR: Symbol("error"),
     OFFLINE: Symbol("offline"), // indicates that there is no live (websocket) backend.

View File

@@ -1,14 +1,12 @@
-import { fetchApi } from '../utils'
-import * as optionActions from './ui/option'
+import { fetchApi } from "../utils"
+import * as optionsEditorActions from "./ui/optionsEditor"
+import _ from "lodash"

 export const RECEIVE = 'OPTIONS_RECEIVE'
 export const UPDATE = 'OPTIONS_UPDATE'
 export const REQUEST_UPDATE = 'REQUEST_UPDATE'
-export const UNKNOWN_CMD = 'OPTIONS_UNKNOWN_CMD'

-const defaultState = {
-}
+const defaultState = {}

 export default function reducer(state = defaultState, action) {
     switch (action.type) {
@@ -27,18 +25,22 @@ export default function reducer(state = defaultState, action) {
     }
 }

-export function update(options) {
+let sendUpdate = (option, value, dispatch) => {
+    fetchApi.put('/options', { [option]: value }).then(response => {
+        if (response.status === 200) {
+            dispatch(optionsEditorActions.updateSuccess(option))
+        } else {
+            response.text().then(error => {
+                dispatch(optionsEditorActions.updateError(option, error))
+            })
+        }
+    })
+}
+sendUpdate = _.throttle(sendUpdate, 700, { leading: true, trailing: true })
+
+export function update(option, value) {
     return dispatch => {
-        let option = Object.keys(options)[0]
-        dispatch({ type: optionActions.OPTION_UPDATE_START, option, value: options[option] })
-        fetchApi.put('/options', options).then(response => {
-            if (response.status === 200) {
-                dispatch({ type: optionActions.OPTION_UPDATE_SUCCESS, option})
-            } else {
-                response.text().then( text => {
-                    dispatch({type: optionActions.OPTION_UPDATE_ERROR, error: text, option})
-                })
-            }
-        })
+        dispatch(optionsEditorActions.startUpdate(option, value))
+        sendUpdate(option, value, dispatch);
     }
 }

View File

@@ -3,7 +3,6 @@ import { fetchApi } from '../utils'
 export const RECEIVE = 'SETTINGS_RECEIVE'
 export const UPDATE = 'SETTINGS_UPDATE'
 export const REQUEST_UPDATE = 'REQUEST_UPDATE'
-export const UNKNOWN_CMD = 'SETTINGS_UNKNOWN_CMD'

 const defaultState = {

View File

@@ -2,12 +2,12 @@ import { combineReducers } from 'redux'
 import flow from './flow'
 import header from './header'
 import modal from './modal'
-import option from './option'
+import optionsEditor from './optionsEditor'

 // TODO: Just move ducks/ui/* into ducks/?
 export default combineReducers({
     flow,
     header,
     modal,
-    option,
+    optionsEditor,
 })

View File

@@ -1,6 +1,7 @@
 import { Key } from "../../utils"
 import { selectTab } from "./flow"
 import * as flowsActions from "../flows"
+import * as modalActions from "./modal"

 export function onKeyDown(e) {
@@ -46,7 +47,11 @@ export function onKeyDown(e) {
             break

         case Key.ESC:
-            dispatch(flowsActions.select(null))
+            if(getState().ui.modal.activeModal){
+                dispatch(modalActions.hideModal())
+            } else {
+                dispatch(flowsActions.select(null))
+            }
             break

         case Key.LEFT: {

View File

@@ -0,0 +1,73 @@
import { HIDE_MODAL } from "./modal"

export const OPTION_UPDATE_START = 'UI_OPTION_UPDATE_START'
export const OPTION_UPDATE_SUCCESS = 'UI_OPTION_UPDATE_SUCCESS'
export const OPTION_UPDATE_ERROR = 'UI_OPTION_UPDATE_ERROR'

const defaultState = {
    /* optionName -> {isUpdating, value (client-side), error} */
}

export default function reducer(state = defaultState, action) {
    switch (action.type) {
        case OPTION_UPDATE_START:
            return {
                ...state,
                [action.option]: {
                    isUpdating: true,
                    value: action.value,
                    error: false,
                }
            }

        case OPTION_UPDATE_SUCCESS:
            return {
                ...state,
                [action.option]: undefined
            }

        case OPTION_UPDATE_ERROR:
            let val = state[action.option].value;
            if (typeof(val) === "boolean") {
                // If a boolean option errs, reset it to its previous state to be less confusing.
                // Example: Start mitmweb, check "add_upstream_certs_to_client_chain".
                val = !val;
            }
            return {
                ...state,
                [action.option]: {
                    value: val,
                    isUpdating: false,
                    error: action.error
                }
            }

        case HIDE_MODAL:
            return {}

        default:
            return state
    }
}

export function startUpdate(option, value) {
    return {
        type: OPTION_UPDATE_START,
        option,
        value,
    }
}

export function updateSuccess(option) {
    return {
        type: OPTION_UPDATE_SUCCESS,
        option,
    }
}

export function updateError(option, error) {
    return {
        type: OPTION_UPDATE_ERROR,
        option,
        error,
    }
}