Using ProGuard with a Scala AWS Lambda function
I have a question about using ProGuard together with a Scala AWS Lambda function. I created a very simple AWS Lambda function, as follows:
package example

import scala.collection.JavaConverters._
import com.amazonaws.services.lambda.runtime.events.S3Event
import com.amazonaws.services.lambda.runtime.Context

object Main extends App {
  def kinesisEventHandler(event: S3Event, context: Context): Unit = {
    val result = event.getRecords.asScala.map(m => m.getS3.getObject.getKey)
    println(result)
  }
}

I imported the following packages:

"com.amazonaws" % "aws-lambda-java-core" % "1.1.0"
"com.amazonaws" % "aws-lambda-java-events" % "1.3.0"

When I build a fat jar, it is 13 MB and works as expected as an AWS Lambda function (it only prints output, for testing). 13 MB is very large, so I tried ProGuard to shrink the jar, but it does not work: I keep running into problems, and after two days I am out of ideas for how to solve this.

This is my ProGuard configuration:

-injars "/Users/x/x/x/AWS_Lambda/target/scala-2.12/lambda-demo-assembly-1.0.jar"
-libraryjars "/Users/x/x/x/AWS_Lambda/lib_managed/jars/org.scala-lang/scala-library/scala-library-2.12.1.jar"
-libraryjars "/Users/x/x/x/AWS_Lambda/lib_managed/jars/com.amazonaws/aws-lambda-java-core/aws-lambda-java-core-1.1.0.jar"
-libraryjars "/Library/Java/JavaVirtualMachines/jdk1.8.0_102.jdk/Contents/Home/jre/lib/rt.jar"
-libraryjars "/Users/x/x/x/AWS_Lambda/lib_managed/jars/com.amazonaws/aws-java-sdk-s3/aws-java-sdk-s3-1.11.0.jar"
-libraryjars "/Users/x/x/x/AWS_Lambda/lib_managed/jars/com.amazonaws/aws-lambda-java-events/aws-lambda-java-events-1.3.0.jar"
-outjars "/Users/x/x/x/AWS_Lambda/target/scala-2.12/proguard/lambda-demo_2.12-1.0.jar"

-dontoptimize
-dontobfuscate
-dontnote
-dontwarn
-keepattributes SourceFile,LineNumberTable

# Preserve all annotations.
-keepattributes *Annotation*

# Preserve all public applications.
-keepclasseswithmembers public class * {
    public static void main(java.lang.String[]);
}

# Preserve some classes and class members that are accessed by means of
# introspection.
-keep class * implements org.xml.sax.EntityResolver

-keepclassmembers class * {
    ** MODULE$;
}

-keepclassmembernames class scala.concurrent.forkjoin.ForkJoinPool {
    long eventCount;
    int workerCounts;
    int runControl;
    scala.concurrent.forkjoin.ForkJoinPool$WaitQueueNode syncStack;
    scala.concurrent.forkjoin.ForkJoinPool$WaitQueueNode spareStack;
}

-keepclassmembernames class scala.concurrent.forkjoin.ForkJoinWorkerThread {
    int base;
    int sp;
    int runState;
}

-keepclassmembernames class scala.concurrent.forkjoin.ForkJoinTask {
    int status;
}

-keepclassmembernames class scala.concurrent.forkjoin.LinkedTransferQueue {
    scala.concurrent.forkjoin.LinkedTransferQueue$PaddedAtomicReference head;
    scala.concurrent.forkjoin.LinkedTransferQueue$PaddedAtomicReference tail;
    scala.concurrent.forkjoin.LinkedTransferQueue$PaddedAtomicReference cleanMe;
}

# Preserve some classes and class members that are accessed by means of
# introspection in the Scala compiler library, if it is processed as well.
#-keep class * implements jline.Completor
#-keep class * implements jline.Terminal
#-keep class scala.tools.nsc.Global
#-keepclasseswithmembers class * {
#    <init>(scala.tools.nsc.Global);
#}
#-keepclassmembers class * {
#    *** scala_repl_value();
#    *** scala_repl_result();
#}

# Preserve all native method names and the names of their classes.
-keepclasseswithmembernames,includedescriptorclasses class * {
    native <methods>;
}

# Preserve the special static methods that are required in all enumeration
# classes.
-keepclassmembers,allowoptimization enum * {
    public static **[] values();
    public static ** valueOf(java.lang.String);
}

# Explicitly preserve all serialization members. The Serializable interface
# is only a marker interface, so it wouldn't save them.
# You can comment this out if your application doesn't use serialization.
# If your code contains serializable classes that have to be backward
# compatible, please refer to the manual.
-keepclassmembers class * implements java.io.Serializable {
    static final long serialVersionUID;
    static final java.io.ObjectStreamField[] serialPersistentFields;
    private void writeObject(java.io.ObjectOutputStream);
    private void readObject(java.io.ObjectInputStream);
    java.lang.Object writeReplace();
    java.lang.Object readResolve();
}

# Your application may contain more items that need to be preserved;
# typically classes that are dynamically created using Class.forName:

# -keep public class mypackage.MyClass
# -keep public interface mypackage.MyInterface
# -keep public class * implements mypackage.MyInterface

-keep,includedescriptorclasses class example.** { *; }

-keepclassmembers class * {
    <init>(...);
}

When I run this, the resulting jar is very small (about 5 MB), but when I invoke the Lambda I get the following error:

"errorMessage": "java.lang.NoSuchMethodException: com.amazonaws.services.s3.event.S3EventNotification.parseJson(java.lang.String)",
"errorType": "lambdainternal.util.ReflectUtil$ReflectException"

I looked at that class, and ProGuard had indeed removed the method. When I change the configuration to keep it, I just hit the next problem in another class.

Has anyone already used ProGuard with a Scala AWS Lambda function and got it set up properly, or does anyone recognize this problem? Is there another good way to shrink the jar?

Best,

Solution
Honestly, 13 MB is not all that big. However, even though I am sure this will be considered heresy by Scala developers, I created an equivalent handler in Java and it comes in at a bit over 7 MB. I did not try ProGuard on it; that might shrink it further.
The big contributor is the S3Event package you are using. If you look at what gets included because of that one package, it pulls in a huge amount of extra stuff: SQS, SNS, Dynamo and so on. Ultimately that is the biggest part. As a small test I tried eliminating every library except aws-lambda-java-core and used JsonPath instead. That got my jar file down to 458K.

My code is below. I know it is not Scala, but perhaps you can take some ideas from it. The key is to eliminate as many of the AWS libraries as possible. Of course, if you want to do anything in your Lambda besides print the keys, you will need to bring in more of the AWS libraries again, and those are roughly 7 MB in size.

import java.io.IOException;
import java.io.InputStream;
import java.io.OutputStream;
import java.util.List;

import com.amazonaws.services.lambda.runtime.Context;
import com.amazonaws.services.lambda.runtime.RequestStreamHandler;
import com.jayway.jsonpath.JsonPath;

public class S3EventLambdaHandler implements RequestStreamHandler {
    public void handleRequest(InputStream inputStream, OutputStream outputStream, Context context) {
        try {
            List<String> keys = JsonPath.read(inputStream, "$.Records[*].s3.object.key");

            for (String nextKey : keys)
                System.out.println(nextKey);
        }
        catch (IOException ioe) {
            context.getLogger().log("caught IOException reading input stream");
        }
    }
}
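For comparison, here is a rough, untested Scala sketch of the same JsonPath-based idea. It assumes the same aws-lambda-java-core and json-path dependencies as the Java version, and the class name is only illustrative:

import java.io.{IOException, InputStream, OutputStream}
import java.util.{List => JList}

import scala.collection.JavaConverters._

import com.amazonaws.services.lambda.runtime.{Context, RequestStreamHandler}
import com.jayway.jsonpath.JsonPath

// Untested sketch: the same JsonPath-based handler as the Java version above.
class S3EventLambdaHandler extends RequestStreamHandler {
  override def handleRequest(input: InputStream, output: OutputStream, context: Context): Unit = {
    try {
      // JsonPath returns a java.util.List for the [*] projection
      val keys: JList[String] = JsonPath.read(input, "$.Records[*].s3.object.key")
      keys.asScala.foreach(println)
    } catch {
      case _: IOException =>
        context.getLogger.log("caught IOException reading input stream")
    }
  }
}

Keep in mind that the Scala standard library still has to be packaged with the function, so the jar will not get down to the 458K of the Java version, but this avoids dragging in the whole aws-lambda-java-events dependency tree.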