swift – Alamofire asynchronous completionHandler for JSON requests
After adopting the Alamofire framework, I noticed that the completionHandler runs on the main thread. I would like to know whether the code below is good practice for kicking off a Core Data import task inside the completion handler:
    Alamofire.request(.GET, "http://myWebSite.com", parameters: parameters)
        .responseJSON(options: .MutableContainers) { (_, _, JSON, error) -> Void in
            dispatch_async(dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_HIGH, 0), { () -> Void in
                if let err = error {
                    println("Error: \(error)")
                    return
                }
                if let jsonArray = JSON as? [NSArray] {
                    let importer = CDImporter(incomingArray: jsonArray, entity: "Artist", map: artistEntityMap)
                }
            })
        }
This is a great question. Your approach is perfectly valid. However, Alamofire can actually help you streamline this even further.
Your Example Code Dispatch Queue Breakdown

In your example code, you are jumping between the following dispatch queues:

> NSURLSession dispatch queue
> Main dispatch queue, where responseJSON calls your completion handler
> High priority global queue, where the JSON handling and Core Data import run
> Main dispatch queue again, if you need to update the user interface

As you can see, you're hopping all over the place. Let's take a look at an alternative approach that leverages a powerful feature inside Alamofire.

Alamofire Response Dispatch Queues

Alamofire has the optimal approach built into its own low-level processing. The single response method that is ultimately called by all of the custom response serializers supports a custom dispatch queue if you choose to use it.

While GCD is amazing at hopping between dispatch queues, you want to avoid jumping to a queue that is busy (e.g. the main thread). By eliminating the hop back to the main thread in the middle of the asynchronous processing, you can potentially speed things up considerably. The following examples demonstrate how to do this using Alamofire logic straight out of the box.

Alamofire 1.x

    let queue = dispatch_queue_create("com.cnoon.manager-response-queue", DISPATCH_QUEUE_CONCURRENT)

    let request = Alamofire.request(.GET, "http://httpbin.org/get", parameters: ["foo": "bar"])
    request.response(
        queue: queue,
        serializer: Request.JSONResponseSerializer(options: .AllowFragments),
        completionHandler: { _, _, JSON, _ in
            // You are now running on the concurrent `queue` you created earlier.
            println("Parsing JSON on thread: \(NSThread.currentThread()) is main thread: \(NSThread.isMainThread())")

            // Validate your JSON response and convert into model objects if necessary
            println(JSON)

            // To update anything on the main thread, just jump back on like so.
            dispatch_async(dispatch_get_main_queue()) {
                println("Am I back on the main thread: \(NSThread.isMainThread())")
            }
        }
    )

Alamofire 3.x (Swift 2.2 and 2.3)

    let queue = dispatch_queue_create("com.cnoon.manager-response-queue", DISPATCH_QUEUE_CONCURRENT)

    let request = Alamofire.request(.GET, "http://httpbin.org/get", parameters: ["foo": "bar"])
    request.response(
        queue: queue,
        responseSerializer: Request.JSONResponseSerializer(options: .AllowFragments),
        completionHandler: { response in
            // You are now running on the concurrent `queue` you created earlier.
            print("Parsing JSON on thread: \(NSThread.currentThread()) is main thread: \(NSThread.isMainThread())")

            // Validate your JSON response and convert into model objects if necessary
            print(response.result.value)

            // To update anything on the main thread, just jump back on like so.
            dispatch_async(dispatch_get_main_queue()) {
                print("Am I back on the main thread: \(NSThread.isMainThread())")
            }
        }
    )

Alamofire 4.x (Swift 3)

    let queue = DispatchQueue(label: "com.cnoon.response-queue", qos: .utility, attributes: [.concurrent])

    Alamofire.request("http://httpbin.org/get", parameters: ["foo": "bar"])
        .response(
            queue: queue,
            responseSerializer: DataRequest.jsonResponseSerializer(),
            completionHandler: { response in
                // You are now running on the concurrent `queue` you created earlier.
                print("Parsing JSON on thread: \(Thread.current) is main thread: \(Thread.isMainThread)")

                // Validate your JSON response and convert into model objects if necessary
                print(response.result.value)

                // To update anything on the main thread, just jump back on like so.
                DispatchQueue.main.async {
                    print("Am I back on the main thread: \(Thread.isMainThread)")
                }
            }
        )

Alamofire Dispatch Queue Breakdown

Here is the breakdown of the different dispatch queues involved with this approach:

> NSURLSession dispatch queue
> Custom concurrent dispatch queue, where the JSON handling runs
> Main dispatch queue, only if you need to update the user interface

Summary

By eliminating the first hop back to the main dispatch queue, you have eliminated a potential bottleneck and made your entire request and processing pipeline asynchronous. Awesome!

With that said, I can't stress enough how important it is to become familiar with how Alamofire really works internally. You never know when you may find something that can really help you improve your own code.
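To connect this back to the Core Data import from the question, below is a minimal sketch (not part of the answer above) of how the Alamofire 4.x custom-queue pattern could feed a background import. It assumes an NSPersistentContainer named persistentContainer and an "Artist" entity with a "name" attribute; the URL, parameters, and JSON shape are placeholders, and the question's CDImporter/artistEntityMap are replaced with a plain insert loop because their implementations are not shown.

    import Alamofire
    import CoreData

    // Sketch only: `persistentContainer`, the "Artist" entity, the URL and the JSON
    // shape are assumptions, not taken from the question or the answer above.
    func importArtists(into persistentContainer: NSPersistentContainer) {
        // Parse the response off the main queue, as in the Alamofire 4.x example.
        let queue = DispatchQueue(label: "com.example.import-queue", qos: .utility, attributes: [.concurrent])

        Alamofire.request("http://myWebSite.com", parameters: ["format": "json"])
            .response(
                queue: queue,
                responseSerializer: DataRequest.jsonResponseSerializer(),
                completionHandler: { response in
                    // Still on `queue`, not on the main thread.
                    guard let jsonArray = response.result.value as? [[String: Any]] else {
                        print("Error or unexpected payload: \(String(describing: response.result.error))")
                        return
                    }

                    // The import runs on the background context's own private queue,
                    // so it never blocks the main thread or touches `queue` directly.
                    persistentContainer.performBackgroundTask { context in
                        for artistJSON in jsonArray {
                            let artist = NSEntityDescription.insertNewObject(forEntityName: "Artist", into: context)
                            artist.setValue(artistJSON["name"] as? String, forKey: "name")
                        }
                        do {
                            try context.save()
                        } catch {
                            print("Core Data save failed: \(error)")
                        }

                        // Hop back to the main queue only for UI work.
                        DispatchQueue.main.async {
                            print("Import finished, safe to reload the UI here.")
                        }
                    }
                }
            )
    }

Using performBackgroundTask keeps the Core Data work on the context's own private queue, which stays correct no matter which dispatch queue Alamofire delivers the response on.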