Requests 2.0

Published: 2019-05-11

Every now and then the Requests project gets bored of fixing bugs and decides to break a whole ton of your code. But it doesn’t look good when we put it like that, so instead we call it a ‘major release’ and sell it as being full of shiny new features. Unfortunately it turns out that people complain if we break their code and don’t provide a nice way to find out what we broke.


So we provide a changelist with every release. The changelist is aimed at providing an easy-to-scan list of the changes, so that busy downstream authors can quickly identify the things that will cause them pain and fix them. That's great, but often people want a slightly more detailed explanation and description of what we did and why we did it.


Well, Requests just hit 2.0, and that's a major release right there. To help make your life easier, I've whipped up this post: a lengthier explanation of what has changed in Requests to bring us up to 2.0. I'll tackle each change in order of its presence in the changelist, and link to the relevant issues on GitHub for people who want to see what fool convinced Kenneth it was a good idea.


Let’s do it!


Header Dictionary Keys are always Native Strings

Previously, Requests would always encode any header keys you gave it to bytestrings on both Python 2 and Python 3. This was in principle fine. In practice, we had a few problems:


  • It broke overriding headers that are otherwise automatically set by Requests, such as Content-Type.
  • It could cause unpleasant UnicodeDecodeErrors if you had unicode header values on Python 2.
  • It didn’t work well with how urllib3 expects its inputs.

So we now coerce header keys to the native string type on each platform. Note that if you provide non-native string keys (unicode on Python 2, bytestrings on Python 3), we will assume the encoding is UTF-8 when we convert the type. Be warned.

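A quick sketch of the coercion (the URL and header name are placeholders, and nothing is sent over the network): preparing a request on Python 3 turns byte-string header keys into native `str` keys.

```python
import requests

# Sketch: header keys are coerced to the native string type when a
# request is prepared (str on Python 3). Placeholder URL and header;
# no network traffic happens at prepare time.
req = requests.Request('GET', 'http://example.com/',
                       headers={b'X-Custom': b'value'})
prepped = req.prepare()

# Every key in the prepared headers is a native string.
assert all(isinstance(k, str) for k in prepped.headers)
assert 'X-Custom' in prepped.headers
```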

Proxy URLs must now have an explicit scheme

Merged in .


You used to be able to provide the proxy dictionary with proxies that didn’t have a scheme, like this:


{'http': '192.0.0.1:8888', 'https': '192.0.0.2:8888'}

This was useful for convenience, but it turned out to be a secret source of bugs. In the absence of a scheme, Requests would assume you wanted to use the scheme of the key, so that the above dictionary was interpreted as:

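The snippet showing that interpretation is missing from this copy; reconstructed, the implicit behaviour was roughly equivalent to prepending each dictionary key as the scheme:

```python
# Sketch of the old implicit behaviour (reconstructed): each key was
# prepended to its scheme-less proxy URL as the scheme.
proxies = {'http': '192.0.0.1:8888', 'https': '192.0.0.2:8888'}

interpreted = {
    scheme: '%s://%s' % (scheme, host)
    for scheme, host in proxies.items()
}
assert interpreted == {
    'http': 'http://192.0.0.1:8888',
    'https': 'https://192.0.0.2:8888',
}
```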

It turns out that this is often not what people wanted. Rather than continue to guess, as of 2.0 Requests will throw a MissingScheme exception if such a proxy URL is used. This includes any proxies sourced from environment variables.


Timeouts Are Better

Fixed downstream from us in urllib3.


Timeouts have been a source of pain for a lot of people for quite some time. They tend to behave in unintuitive ways, and we ended up adding notes to the documentation to attempt to fight this problem.


However, thanks to some sweet work done in urllib3, you now get better control over timeouts.


When stream=True, the timeout value now applies only to the connection attempt, not to any of the actual data download. When stream=False, we apply the timeout value to the connection process, and then to the data download.


To be clear, that means that this:


>>> r = requests.get(url, timeout=5, stream=False)

Could take up to 10 seconds to execute: 5 seconds will be the maximum wait for connection, and 5 seconds will be the maximum wait for a read to return.


RequestException is now a subclass of IOError

This is fairly simple. The Python docs are pretty clear on this point:


Raised when an error is detected that doesn’t fall in any of the other categories.


Conceptually, RequestException should not be a subclass of RuntimeError; it should be a subclass of IOError. So now it is.

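A minimal check of the new hierarchy (nothing here touches the network):

```python
import requests

# RequestException now subclasses IOError, so broad IOError handlers
# also catch requests failures, including its own subclasses such as
# ConnectionError.
assert issubclass(requests.exceptions.RequestException, IOError)
assert issubclass(requests.exceptions.ConnectionError, IOError)
```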

Added new method to PreparedRequest objects

We do a lot of internal copying of PreparedRequest objects, so there was a fair amount of redundant code in the library. We added the PreparedRequest.copy() method to clean that up, and it appeared to be sufficiently useful that it’s now part of the public API.

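A small sketch of the new method (placeholder URL and header; the request is prepared but never sent):

```python
import requests

# Prepare a request, then copy it with the new public method.
prepped = requests.Request('GET', 'http://example.com/',
                           headers={'X-Demo': 'yes'}).prepare()
dup = prepped.copy()

# The copy is a distinct object with the same URL and headers.
assert dup is not prepped
assert dup.url == prepped.url
assert dup.headers == prepped.headers
```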

Allow preparing of Requests with Session context

Proposed in .


This involved adding a new method to Session objects: Session.prepare_request(). This method takes a Request object and turns it into a PreparedRequest, while adding data specific to a single Session, e.g. any relevant cookie data. This has been a fairly highly requested feature since Kenneth added the PreparedRequest functionality in 1.0.


The new primary PreparedRequest workflow is:

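The snippet illustrating the workflow is missing from this copy; here is a reconstruction following the usual requests pattern (the URL and the session header are placeholders, and nothing is sent):

```python
import requests

s = requests.Session()
s.headers.update({'X-Api-Key': 'demo'})  # session-level state (placeholder)

req = requests.Request('GET', 'http://example.com/')
prepped = s.prepare_request(req)  # merges session headers/cookies in

# Session state was merged into the prepared request.
assert prepped.headers['X-Api-Key'] == 'demo'

# To actually send it:
# resp = s.send(prepped)
```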

This provides all the many benefits of Requests sessions for your PreparedRequests.


Extended the HTTPAdapter subclass interface

Implemented as part of the proxy improvements mentioned later.


We have a HTTPAdapter.add_headers() method for adding HTTP headers to any request being sent through a Transport Adapter. As part of the extended work on proxies, we’ve added a new method, HTTPAdapter.proxy_headers(), that does the equivalent thing for requests being sent through proxies. This is particularly useful for requests that use the CONNECT verb to tunnel HTTPS data through proxies, as it enables them to specify headers that should be sent to the proxy, not the downstream target.


It’s expected that most users will never worry about this function, but it is a useful extension to the subclassing interface of the HTTPAdapter.

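For the curious, a hypothetical subclass might look like this (the proxy URL and header name are made up for illustration; no connection is opened):

```python
from requests.adapters import HTTPAdapter

# Hypothetical subclass: add a header that is sent to the proxy during
# the CONNECT handshake, not to the downstream target.
class TaggedProxyAdapter(HTTPAdapter):
    def proxy_headers(self, proxy):
        headers = super(TaggedProxyAdapter, self).proxy_headers(proxy)
        headers['X-Proxy-Tag'] = 'demo'
        return headers

adapter = TaggedProxyAdapter()
headers = adapter.proxy_headers('http://user:secret@proxy.example:3128/')

assert headers['X-Proxy-Tag'] == 'demo'
# Credentials embedded in the proxy URL yield a Proxy-Authorization header.
assert 'Proxy-Authorization' in headers
```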

Better Handling of Chunked Encoding Errors

Identified by many issues, but one in particular was the catalyst.


It turns out that a distressingly large number of websites report that they will be using chunked encoding (by setting Transfer-Encoding: chunked in the HTTP headers), but then send all the data as one blob. I've actually touched on this before.


Anyway, when that happens we used to throw an ugly httplib.IncompleteRead exception. We now catch that and throw the much nicer requests.ChunkedEncodingError instead. Far better.


Invalid Percent-Escape Sequences Now Better Handled

Proposed in .


This is fairly simple. If Requests encountered a URL that contained an invalid percent-escape sequence, such as the clearly invalid http://%zz/, we used to throw a ValueError moaning about an invalid literal for base 16. That, while true, was unhelpful. We now throw a requests.InvalidURL exception instead.

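You can see the behaviour in the helper that performs the unquoting (an internal utility, shown here purely for illustration):

```python
from requests.exceptions import InvalidURL
from requests.utils import unquote_unreserved  # internal helper, for illustration

# '%zz' is not a valid percent-escape: the helper raises InvalidURL
# rather than a bare ValueError about base-16 literals.
try:
    unquote_unreserved('http://%zz/')
    raised = False
except InvalidURL:
    raised = True
assert raised
```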

Correct Some Reason Phrases

Proposed and fixed by .


We had an invalid reason phrase for the HTTP 208 response code. The correct phrase is Already Reported, but we were using IM Used. We fixed that up, and added the HTTP 226 status code whose reason phrase actually is IM Used.

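The public status-code lookup reflects the fix:

```python
import requests

# The corrected reason phrases map to the right numeric codes.
assert requests.codes.already_reported == 208
assert requests.codes.im_used == 226
```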

Vastly Improved Proxy Support

Proposed many, many times; I've written about it before.


HTTPS proxies used to be totally broken: you could just never assume they worked. Thanks to some phenomenal work on urllib3 by a number of awesome people, we can now announce support for the HTTP CONNECT verb, and as a result support for HTTPS proxies.


This is a huge positive for us, and I'm delighted it made it in. Special thanks go to everyone whose great work got this in place.


Miscellaneous Bug Fixes

We also fixed a number of bugs. In no particular order, they are:


  • Cookies are now correctly sent on responses to 401 messages, and any 401s received that set cookies now have those cookies persisted.
  • We now select chunked encoding only when we legitimately don't know how large a file is, not when we have a zero-length file.
  • Mixed case schemes are now supported throughout Requests, including when mounting Transport Adapters.
  • We have a much more robust infrastructure for streaming downloads, which should now actually run to completion.
  • We now collect environment proxies from more locations, such as the Windows registry.
  • We have a few minor assorted cookies fixes: nothing dramatic.
  • We no longer reuse PreparedRequest objects on redirects.
  • Auth settings in .netrc files no longer override explicit auth values: instead it’s the other way around.
  • Cookies that specify port numbers in their host field are now correctly parsed.
  • You can perform streaming uploads with BytesIO objects now.
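The BytesIO change in the last bullet can be sketched like this (placeholder URL; the request is prepared but never sent):

```python
import io
import requests

# Sketch: prepare a streaming upload from an in-memory BytesIO.
payload = io.BytesIO(b'hello world')
prepped = requests.Request('POST', 'http://example.com/upload',
                           data=payload).prepare()

# The file-like object becomes the body directly; its length is known,
# so Content-Length is set rather than chunked transfer encoding.
assert prepped.body is payload
assert prepped.headers['Content-Length'] == '11'
```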

Summary

Requests 2.0 is an awesome release. In particular, the proxy and timeout improvements are a massive win. 2.0 has involved a lot of work from a ton of contributors, and coincides with Requests passing 5 million downloads. This is definitely another major milestone. So thanks for all your continuing support! On behalf of the Requests project, I want to say that you’re excellent, and we love you all.

