Original article: https://doc.scrapy.org/en/latest/topics/architecture.html

This document describes the architecture of Scrapy and how its components interact.

Overview
The following diagram shows an overview of the Scrapy architecture with its components and an outline of the data flow that takes place inside the system (shown by the red arrows). A brief description of the components is included below, with links for more detailed information about them. The data flow is also described below.

Data flow
(Scrapy architecture diagram)

The data flow in Scrapy is controlled by the execution engine, and goes like this:

1. The Engine gets the initial Requests to crawl from the Spider.
2. The Engine schedules the Requests in the Scheduler and asks for the next Requests to crawl.
3. The Scheduler returns the next Requests to the Engine.
4. The Engine sends the Requests to the Downloader, passing through the Downloader Middlewares.
5. Once the page finishes downloading, the Downloader generates a Response (with that page) and sends it to the Engine, passing through the Downloader Middlewares.
6. The Engine receives the Response from the Downloader and sends it to the Spider for processing, passing through the Spider Middleware.
7. The Spider processes the Response and returns scraped items and new Requests (to follow) to the Engine, passing through the Spider Middleware.
8. The Engine sends processed items to the Item Pipelines, then sends processed Requests to the Scheduler and asks for possible next Requests to crawl.
9. The process repeats (from step 1) until there are no more requests from the Scheduler.
Components
Scrapy Engine
The engine is responsible for controlling the data flow between all components of the system, and triggering events when certain actions occur. See the Data flow section above for more details.

Scheduler
The Scheduler receives requests from the engine and enqueues them, feeding them back to the engine later when the engine requests them.

Downloader
The Downloader is responsible for fetching web pages and feeding them to the engine which, in turn, feeds them to the spiders.

Spiders
Spiders are custom classes written by Scrapy users to parse responses and extract items (aka scraped items) from them, or additional requests to follow. For more information see Spiders.
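As a rough sketch (the spider name, target site and CSS selectors below are only illustrative, in the style of the Scrapy tutorial), a spider subclasses scrapy.Spider, defines where crawling starts, and yields items and/or follow-up requests from its parse callback:

```python
import scrapy


class QuotesSpider(scrapy.Spider):
    # Illustrative spider: the name and start URL are example values.
    name = "quotes"
    start_urls = ["https://quotes.toscrape.com/"]

    def parse(self, response):
        # Extract items (scraped data) from the response...
        for quote in response.css("div.quote"):
            yield {
                "text": quote.css("span.text::text").get(),
                "author": quote.css("small.author::text").get(),
            }
        # ...and/or additional requests to follow.
        next_page = response.css("li.next a::attr(href)").get()
        if next_page is not None:
            yield response.follow(next_page, callback=self.parse)
```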
Item Pipeline
The Item Pipeline is responsible for processing the items once they have been extracted (or scraped) by the spiders. Typical tasks include cleansing, validation and persistence (like storing the item in a database). For more information see Item Pipeline.
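A minimal sketch, assuming the dict items yielded by the illustrative spider above: a pipeline is a plain class with a process_item method, and the cleansing/validation rules here are invented for the example. It would still need to be enabled via ITEM_PIPELINES in the project settings.

```python
from scrapy.exceptions import DropItem


class QuoteCleanupPipeline:
    # Illustrative pipeline: enable it via ITEM_PIPELINES in settings.py.

    def process_item(self, item, spider):
        # Validation: drop items that are missing the "text" field.
        if not item.get("text"):
            raise DropItem(f"Missing text in {item!r}")
        # Cleansing: strip surrounding curly quotation marks before persistence.
        item["text"] = item["text"].strip("\u201c\u201d")
        return item
```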
Downloader middlewares
Downloader middlewares are specific hooks that sit between the Engine and the Downloader and process requests when they pass from the Engine to the Downloader, and responses that pass from the Downloader to the Engine.

Use a Downloader middleware if you need to do one of the following (a sketch follows the list):

- process a request just before it is sent to the Downloader (i.e. right before Scrapy sends the request to the website);
- change a received response before passing it to a spider;
- send a new Request instead of passing the received response to a spider;
- pass a response to a spider without fetching a web page;
- silently drop some requests.

For more information see Downloader Middleware.
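As a sketch (the class name and header value are made up here), a downloader middleware implements hooks such as process_request and process_response, which run on either side of the download:

```python
class CustomHeadersDownloaderMiddleware:
    # Illustrative middleware: enable it via DOWNLOADER_MIDDLEWARES in the settings.

    def process_request(self, request, spider):
        # Called for each request passing from the Engine to the Downloader.
        request.headers.setdefault("User-Agent", "my-crawler (example)")
        return None  # returning None lets processing continue normally

    def process_response(self, request, response, spider):
        # Called for each response passing from the Downloader back to the Engine.
        spider.logger.debug("Got %s for %s", response.status, request.url)
        return response
```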
Spider middlewares
Spider middlewares are specific hooks that sit between the Engine and the Spiders and are able to process spider input (responses) and output (items and requests).

Use a Spider middleware if you need to (see the sketch after this list):

- post-process the output of spider callbacks - change/add/remove requests or items;
- post-process start_requests;
- handle spider exceptions;
- call an errback instead of a callback for some requests based on response content.

For more information see Spider Middleware.
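As a sketch (the class name and the filtering rule are invented for this example), a spider middleware can post-process what a spider returns via hooks such as process_spider_output:

```python
class DropShortQuotesSpiderMiddleware:
    # Illustrative middleware: enable it via SPIDER_MIDDLEWARES in the settings.

    def process_spider_output(self, response, result, spider):
        # Called with the output (items and requests) of a spider callback.
        for item_or_request in result:
            # Requests pass through untouched; dict items with a very short
            # "text" field (an invented rule) are silently dropped.
            if isinstance(item_or_request, dict) and len(item_or_request.get("text") or "") < 5:
                continue
            yield item_or_request
```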
Event-driven networking
Scrapy is written with Twisted, a popular event-driven networking framework for Python. Thus, it is implemented using non-blocking (aka asynchronous) code for concurrency.
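As a sketch of what that means in practice, a crawl can be started from a plain script with CrawlerProcess, which starts the Twisted reactor and runs the crawl inside that event loop (this assumes the illustrative QuotesSpider class shown earlier is in scope):

```python
from scrapy.crawler import CrawlerProcess

process = CrawlerProcess(settings={"LOG_LEVEL": "INFO"})
process.crawl(QuotesSpider)  # schedule the illustrative spider defined earlier
process.start()  # starts the Twisted reactor; returns once crawling has finished
```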