TProactor combines the Reactor and Proactor patterns.
TProactor

The proposed solution (TProactor) was developed and implemented at Terabit P/L [6]. The solution has two alternative implementations, one in C++ and one in Java. The C++ version was built using ACE cross-platform low-level primitives and has a common unified asynchronous proactive interface on all platforms.

The main TProactor components are the Engine and WaitStrategy interfaces. Engine manages the lifecycle of asynchronous operations; WaitStrategy manages concurrency strategies. WaitStrategy depends on Engine, and the two always work in pairs. The interfaces between Engine and WaitStrategy are strongly defined. Engines and waiting strategies are implemented as pluggable class-drivers (for the full list of all implemented Engines and corresponding WaitStrategies, see Appendix I).

TProactor is a highly configurable solution. It internally implements three engines (POSIX AIO, SUN AIO and Emulated AIO) and hides six different waiting strategies, based either on an asynchronous kernel API (for POSIX this is not efficient right now due to internal POSIX AIO API problems) or on synchronous Unix APIs. With a set of mutually interchangeable "lego-style" Engines and WaitStrategies, a developer can choose the appropriate internal mechanism (engine and waiting strategy) at run time by setting the appropriate configuration parameters. These settings may be specified according to specific requirements, such as the number of connections, scalability, and the targeted OS. If the operating system supports an async API, a developer may use the true async approach; otherwise the user can opt for an emulated async solution built on one of the synchronous waiting strategies. All of those strategies are hidden behind an emulated async façade. For an HTTP server running on Sun Solaris, for example, the /dev/poll-based waiting strategy may be the most suitable choice.

In terms of performance, our tests show that emulating proactive behavior on top of a reactive core does not impose any overhead: it can be faster, but not slower.
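The Engine/WaitStrategy pairing described above can be sketched as two small interfaces with interchangeable implementations selected at run time from a configuration value. All names below (Engine, WaitStrategy, ProactorConfigDemo, the method signatures) are illustrative assumptions for this sketch, not the actual TProactor API:

```java
// Illustrative sketch (not the real TProactor API): Engine manages the
// async-operation lifecycle, WaitStrategy decides how completions are
// awaited, and the matching pair is picked at run time from configuration.
interface Engine {
    String submitRead(String connection);     // submit an async read, return an op token
}

interface WaitStrategy {
    String awaitCompletion(String opToken);   // wait until the operation completes
}

// One "lego" pair: emulated AIO over a synchronous select()-style wait.
class EmulatedEngine implements Engine {
    public String submitRead(String connection) { return "read(" + connection + ")"; }
}
class SelectWaitStrategy implements WaitStrategy {
    public String awaitCompletion(String opToken) { return opToken + " completed via select"; }
}

// Another pair: a true async kernel API.
class PosixAioEngine implements Engine {
    public String submitRead(String connection) { return "aio_read(" + connection + ")"; }
}
class AioWaitStrategy implements WaitStrategy {
    public String awaitCompletion(String opToken) { return opToken + " completed via aio_suspend"; }
}

public class ProactorConfigDemo {
    // The Engine and its WaitStrategy always travel together, keyed by a config parameter.
    static Object[] makePair(String config) {
        if (config.equals("posix-aio")) {
            return new Object[] { new PosixAioEngine(), new AioWaitStrategy() };
        }
        return new Object[] { new EmulatedEngine(), new SelectWaitStrategy() };
    }

    public static void main(String[] args) {
        Object[] pair = makePair("emulated");
        Engine engine = (Engine) pair[0];
        WaitStrategy wait = (WaitStrategy) pair[1];
        String token = engine.submitRead("conn-1");
        System.out.println(wait.awaitCompletion(token)); // prints "read(conn-1) completed via select"
    }
}
```

Swapping `"emulated"` for `"posix-aio"` changes both halves of the pair at once, which is the point of keeping the two behind strongly defined interfaces.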
According to our test results, TProactor gives on average 10-35% better performance (measured in terms of both throughput and response times) than the reactive model in the standard ACE Reactor implementation on various UNIX/Linux platforms. On Windows it gives the same performance as the standard ACE Proactor.

Performance comparison (Java versus C++ versus C#)

In addition to C++, we also implemented TProactor in Java. As of JDK version 1.4, Java provides only the sync-based approach, which is logically similar to the C select()/poll() approach.

Figures 1 and 2 chart the transfer rate in bits/sec versus the number of connections. These charts represent comparison results for a simple echo server built on the standard ACE Reactor (using Red Hat Linux 9.0), TProactor C++ and Java (IBM 1.4 JVM) on Microsoft Windows and Red Hat Linux 9.0, and a C# echo server running on Windows. Performance of the native AIO APIs is represented by the "Async"-marked curves; emulated AIO (TProactor) by the "AsyncE" curves; and TP_Reactor by the "Synch" curves. All implementations were bombarded by the same client application: a continuous stream of arbitrary fixed-size messages over N connections. The full set of tests was performed on the same hardware. Tests on different machines proved that relative results are consistent.

User code example

The following is the skeleton of a simple TProactor-based Java echo server.
In a nutshell, the developer only has to implement two interfaces:

    class EchoServerProtocol implements AsynchHandler
    {
        AsynchChannel achannel = null;

        ByteBuffer buffer = ByteBuffer.allocate(4096); // read buffer (was left undeclared in the original listing)

        EchoServerProtocol(Demultiplexor m, SelectableChannel channel) throws Exception
        {
            this.achannel = new AsynchChannel(m, this, channel);
        }

        public void start() throws Exception
        {
            // called after construction
            System.out.println(Thread.currentThread().getName() + ": EchoServer protocol started");
            achannel.read(buffer);
        }

        public void onReadCompleted(OpRead opRead) throws Exception
        {
            if (opRead.getError() != null)
            {
                // handle error, do clean-up if needed
                System.out.println("EchoServer::readCompleted: " + opRead.getError().toString());
                achannel.close();
                return;
            }

            if (opRead.getBytesCompleted() <= 0)
            {
                System.out.println("EchoServer::readCompleted: Peer closed " + opRead.getBytesCompleted());
                achannel.close();
                return;
            }

            ByteBuffer buffer = opRead.getBuffer();
            achannel.write(buffer);
        }

        public void onWriteCompleted(OpWrite opWrite) throws Exception
        {
            // logically similar to onReadCompleted
            ...
        }
    }

Conclusion

TProactor provides a common, flexible, and configurable solution for multi-platform high-performance communications development. All of the problems and complexities mentioned in Appendix II are hidden from the developer.

It is clear from the charts that C++ is still the preferable approach for high-performance communication solutions, but Java on Linux comes quite close. However, the overall Java performance was weakened by poor results on Windows. One reason for that may be that the Java 1.4 nio package is based on a select()-style API.

Note: all tests for Java were performed on "raw" buffers (java.nio.ByteBuffer) without data processing.

Taking into account the latest activities to develop robust AIO on Linux [9], we can conclude that the Linux kernel API (the io_xxxx set of system calls) should be more scalable in comparison with the POSIX standard, but is still not portable.
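The completion-callback flow of the skeleton above can be exercised without the TProactor library itself. In the self-contained sketch below, `Completion` and `FakeChannel` are simplified stand-ins for `OpRead` and `AsynchChannel` (assumed names, not the real classes); the handler logic mirrors `onReadCompleted`: close on error or EOF, otherwise echo the buffer back:

```java
import java.nio.ByteBuffer;
import java.nio.charset.StandardCharsets;
import java.util.ArrayList;
import java.util.List;

// Plays the role of OpRead: carries the buffer, the byte count, and any error.
class Completion {
    final ByteBuffer buffer;
    final Exception error;
    Completion(ByteBuffer buffer, Exception error) { this.buffer = buffer; this.error = error; }
    int getBytesCompleted() { return buffer == null ? 0 : buffer.remaining(); }
    ByteBuffer getBuffer()  { return buffer; }
    Exception getError()    { return error; }
}

// Plays the role of AsynchChannel: records writes and close() calls.
class FakeChannel {
    final List<String> writes = new ArrayList<>();
    boolean closed = false;
    void write(ByteBuffer b) { writes.add(StandardCharsets.UTF_8.decode(b).toString()); }
    void close() { closed = true; }
}

public class EchoFlowDemo {
    // Mirrors onReadCompleted: on error or EOF close the channel, otherwise echo.
    static void onReadCompleted(FakeChannel ch, Completion op) {
        if (op.getError() != null || op.getBytesCompleted() <= 0) {
            ch.close();
            return;
        }
        ch.write(op.getBuffer());
    }

    public static void main(String[] args) {
        FakeChannel ch = new FakeChannel();
        ByteBuffer data = ByteBuffer.wrap("hello".getBytes(StandardCharsets.UTF_8));
        onReadCompleted(ch, new Completion(data, null)); // normal read -> echoed back
        onReadCompleted(ch, new Completion(null, null)); // zero bytes (peer closed) -> close
        System.out.println(ch.writes + " closed=" + ch.closed); // prints "[hello] closed=true"
    }
}
```

The framework invokes the handler only on completed operations, so the handler body stays a straight-line state machine: read completed, write submitted, write completed, read submitted again.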
In this case, a TProactor with a new Engine/WaitStrategy pair based on native Linux AIO could easily be implemented to overcome the portability issues and to cover Linux native AIO with the standard ACE Proactor interface.

Appendix I

Engines and waiting strategies implemented in TProactor
Appendix II

All sync waiting strategies can be divided into two groups:
Let us describe some common logical problems for those groups:
Resources

[1] Douglas C. Schmidt, Stephen D. Huston, "C++ Network Programming," Addison-Wesley, 2002, ISBN 0-201-60464-7
[2] W. Richard Stevens, "UNIX Network Programming," vols. 1 and 2, Prentice Hall, 1999, ISBN 0-13-490012-X
[3] Douglas C. Schmidt, Michael Stal, Hans Rohnert, Frank Buschmann, "Pattern-Oriented Software Architecture: Patterns for Concurrent and Networked Objects, Volume 2," Wiley & Sons, NY, 2000
[4] INFO: Socket Overlapped I/O Versus Blocking/Non-blocking Mode. Q181611. Microsoft Knowledge Base Articles.
[5] Microsoft MSDN. I/O Completion Ports.
[6] TProactor (ACE-compatible Proactor).
[7] JavaDoc: java.nio.channels
[8] JavaDoc: java.nio.channels.spi, class SelectorProvider
[9] Linux AIO development

See also: Ian Barile, "I/O Multiplexing & Scalable Socket Servers," DDJ, February 2004

Further reading:
- Further reading on event handling
- The Adaptive Communication Environment
- Terabit Solutions

About the authors

Alex Libman has been programming for 15 years. During the past 5 years his main area of interest has been pattern-oriented multi-platform networked programming using C++ and Java. He is a big fan of, and contributor to, ACE.

Vlad Gilbourd works as a computer consultant, but wishes to spend more time listening to jazz :) As a hobby, he started and runs the www.corporatenews.com.au website.