<scalastyle>
  <name>Scalastyle standard configuration</name>

  <check level="error" class="org.scalastyle.file.HeaderMatchesChecker" enabled="true">
    <parameters>
      <parameter name="header"><![CDATA[/*
 * Licensed to the Apache Software Foundation (ASF) under one or more
 * contributor license agreements.  See the NOTICE file distributed with
 * this work for additional information regarding copyright ownership.
 * The ASF licenses this file to You under the Apache License, Version 2.0
 * (the "License"); you may not use this file except in compliance with
 * the License.  You may obtain a copy of the License at
 *
 *    http://www.apache.org/licenses/LICENSE-2.0
 *
 * Unless required by applicable law or agreed to in writing, software
 * distributed under the License is distributed on an "AS IS" BASIS,
 * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
 * See the License for the specific language governing permissions and
 * limitations under the License.
 */]]></parameter>
    </parameters>
  </check>

  <check level="error" class="org.scalastyle.file.FileLineLengthChecker" enabled="true">
    <parameters>
      <parameter name="maxLineLength"><![CDATA[100]]></parameter>
      <parameter name="tabSize"><![CDATA[2]]></parameter>
      <parameter name="ignoreImports">true</parameter>
    </parameters>
  </check>

  <check level="error" class="org.scalastyle.scalariform.ClassNamesChecker" enabled="true">
    <parameters><parameter name="regex"><![CDATA[[A-Z][A-Za-z]*]]></parameter></parameters>
  </check>

  <check level="error" class="org.scalastyle.scalariform.ObjectNamesChecker" enabled="true">
    <parameters><parameter name="regex"><![CDATA[(config|[A-Z][A-Za-z]*)]]></parameter></parameters>
  </check>

  <check level="error" class="org.scalastyle.scalariform.PackageObjectNamesChecker" enabled="true">
    <parameters><parameter name="regex"><![CDATA[^[a-z][A-Za-z]*$]]></parameter></parameters>
  </check>

  <check level="error" class="org.scalastyle.scalariform.ParameterNumberChecker" enabled="true">
    <parameters><parameter name="maxParameters"><![CDATA[10]]></parameter></parameters>
  </check>

  <check level="error" class="org.scalastyle.scalariform.IfBraceChecker" enabled="true">
    <parameters>
      <parameter name="singleLineAllowed"><![CDATA[true]]></parameter>
      <parameter name="doubleLineAllowed"><![CDATA[true]]></parameter>
    </parameters>
  </check>

  <check level="error" class="org.scalastyle.scalariform.EnsureSingleSpaceBeforeTokenChecker" enabled="true">
    <parameters>
      <parameter name="tokens">ARROW, EQUALS, ELSE, TRY, CATCH, FINALLY, LARROW, RARROW</parameter>
    </parameters>
  </check>

  <check level="error" class="org.scalastyle.scalariform.EnsureSingleSpaceAfterTokenChecker" enabled="true">
    <parameters>
      <parameter name="tokens">ARROW, EQUALS, COMMA, COLON, IF, ELSE, DO, WHILE, FOR, MATCH, TRY, CATCH, FINALLY, LARROW, RARROW</parameter>
    </parameters>
  </check>

  <check level="error" class="org.scalastyle.scalariform.TokenChecker" enabled="true">
    <parameters><parameter name="regex">^FunSuite[A-Za-z]*$</parameter></parameters>
    <customMessage>Tests must extend org.apache.spark.SparkFunSuite instead.</customMessage>
  </check>

  <check customId="println" level="error" class="org.scalastyle.scalariform.TokenChecker" enabled="true">
    <parameters><parameter name="regex">^println$</parameter></parameters>
    <customMessage><![CDATA[Are you sure you want to println? If yes, wrap the code block with
      // scalastyle:off println
      println(...)
      // scalastyle:on println]]></customMessage>
  </check>

  <check customId="hadoopconfiguration" level="error" class="org.scalastyle.file.RegexChecker" enabled="true">
    <parameters><parameter name="regex">spark(.sqlContext)?.sparkContext.hadoopConfiguration</parameter></parameters>
    <customMessage><![CDATA[
      Are you sure that you want to use sparkContext.hadoopConfiguration? In most cases, you should use
      spark.sessionState.newHadoopConf() instead, so that the hadoop configurations specified in Spark session
      configuration will come into effect.
      If you must use sparkContext.hadoopConfiguration, wrap the code block with
      // scalastyle:off hadoopconfiguration
      spark.sparkContext.hadoopConfiguration...
      // scalastyle:on hadoopconfiguration
    ]]></customMessage>
  </check>

  <check customId="visiblefortesting" level="error" class="org.scalastyle.file.RegexChecker" enabled="true">
    <parameters><parameter name="regex">@VisibleForTesting</parameter></parameters>
    <customMessage><![CDATA[
      @VisibleForTesting causes classpath issues. Please note this in the java doc instead (SPARK-11615).
    ]]></customMessage>
  </check>

  <check customId="runtimeaddshutdownhook" level="error" class="org.scalastyle.file.RegexChecker" enabled="true">
    <parameters><parameter name="regex">Runtime\.getRuntime\.addShutdownHook</parameter></parameters>
    <customMessage><![CDATA[
      Are you sure that you want to use Runtime.getRuntime.addShutdownHook? In most cases, you should use
      ShutdownHookManager.addShutdownHook instead.
      If you must use Runtime.getRuntime.addShutdownHook, wrap the code block with
      // scalastyle:off runtimeaddshutdownhook
      Runtime.getRuntime.addShutdownHook(...)
      // scalastyle:on runtimeaddshutdownhook
    ]]></customMessage>
  </check>

  <check customId="mutablesynchronizedbuffer" level="error" class="org.scalastyle.file.RegexChecker" enabled="true">
    <parameters><parameter name="regex">mutable\.SynchronizedBuffer</parameter></parameters>
    <customMessage><![CDATA[
      Are you sure that you want to use mutable.SynchronizedBuffer? In most cases, you should use
      java.util.concurrent.ConcurrentLinkedQueue instead.
      If you must use mutable.SynchronizedBuffer, wrap the code block with
      // scalastyle:off mutablesynchronizedbuffer
      mutable.SynchronizedBuffer[...]
      // scalastyle:on mutablesynchronizedbuffer
    ]]></customMessage>
  </check>

  <check customId="classforname" level="error" class="org.scalastyle.file.RegexChecker" enabled="true">
    <parameters><parameter name="regex">Class\.forName</parameter></parameters>
    <customMessage><![CDATA[
      Are you sure that you want to use Class.forName? In most cases, you should use Utils.classForName instead.
      If you must use Class.forName, wrap the code block with
      // scalastyle:off classforname
      Class.forName(...)
      // scalastyle:on classforname
    ]]></customMessage>
  </check>

  <check customId="awaitresult" level="error" class="org.scalastyle.file.RegexChecker" enabled="true">
    <parameters><parameter name="regex">Await\.result</parameter></parameters>
    <customMessage><![CDATA[
      Are you sure that you want to use Await.result? In most cases, you should use ThreadUtils.awaitResult instead.
      If you must use Await.result, wrap the code block with
      // scalastyle:off awaitresult
      Await.result(...)
      // scalastyle:on awaitresult
    ]]></customMessage>
  </check>

  <check customId="awaitready" level="error" class="org.scalastyle.file.RegexChecker" enabled="true">
    <parameters><parameter name="regex">Await\.ready</parameter></parameters>
    <customMessage><![CDATA[
      Are you sure that you want to use Await.ready? In most cases, you should use ThreadUtils.awaitReady instead.
      If you must use Await.ready, wrap the code block with
      // scalastyle:off awaitready
      Await.ready(...)
      // scalastyle:on awaitready
    ]]></customMessage>
  </check>

  <check customId="caselocale" level="error" class="org.scalastyle.file.RegexChecker" enabled="true">
    <parameters><parameter name="regex">(\.toUpperCase|\.toLowerCase)(?!(\(|\(Locale.ROOT\)))</parameter></parameters>
    <customMessage><![CDATA[
      Are you sure that you want to use toUpperCase or toLowerCase without the root locale? In most cases, you
      should use toUpperCase(Locale.ROOT) or toLowerCase(Locale.ROOT) instead.
      If you must use toUpperCase or toLowerCase without the root locale, wrap the code block with
      // scalastyle:off caselocale
      .toUpperCase
      .toLowerCase
      // scalastyle:on caselocale
    ]]></customMessage>
  </check>

  <check customId="throwerror" level="error" class="org.scalastyle.file.RegexChecker" enabled="true">
    <parameters><parameter name="regex">throw new \w+Error\(</parameter></parameters>
    <customMessage><![CDATA[
      Are you sure that you want to throw Error? In most cases, you should use appropriate Exception instead.
      If you must throw Error, wrap the code block with
      // scalastyle:off throwerror
      throw new XXXError(...)
      // scalastyle:on throwerror
    ]]></customMessage>
  </check>

  <check customId="javaconversions" level="error" class="org.scalastyle.scalariform.TokenChecker" enabled="true">
    <parameters><parameter name="regex">JavaConversions</parameter></parameters>
    <customMessage>Instead of importing implicits in scala.collection.JavaConversions._, import
      scala.collection.JavaConverters._ and use .asScala / .asJava methods</customMessage>
  </check>

  <check customId="commonslang2" level="error" class="org.scalastyle.scalariform.TokenChecker" enabled="true">
    <parameters><parameter name="regex">org\.apache\.commons\.lang\.</parameter></parameters>
    <customMessage>Use Commons Lang 3 classes (package org.apache.commons.lang3.*) instead
      of Commons Lang 2 (package org.apache.commons.lang.*)</customMessage>
  </check>

  <check customId="extractopt" level="error" class="org.scalastyle.scalariform.TokenChecker" enabled="true">
    <parameters><parameter name="regex">extractOpt</parameter></parameters>
    <customMessage>Use jsonOption(x).map(.extract[T]) instead of .extractOpt[T], as the latter
      is slower.</customMessage>
  </check>

  <check level="error" class="org.scalastyle.scalariform.ImportOrderChecker" enabled="true">
    <parameters>
      <parameter name="groups">java,scala,3rdParty,spark</parameter>
      <parameter name="group.java">javax?\..*</parameter>
      <parameter name="group.scala">scala\..*</parameter>
      <parameter name="group.3rdParty">(?!org\.apache\.spark\.).*</parameter>
      <parameter name="group.spark">org\.apache\.spark\..*</parameter>
    </parameters>
  </check>

  <check level="error" class="org.scalastyle.scalariform.DisallowSpaceBeforeTokenChecker" enabled="true">
    <parameters>
      <parameter name="tokens">COMMA</parameter>
    </parameters>
  </check>

  <check customId="SingleSpaceBetweenRParenAndLCurlyBrace" level="error" class="org.scalastyle.file.RegexChecker" enabled="true">
    <parameters><parameter name="regex">\)\{</parameter></parameters>
    <customMessage><![CDATA[
      Single Space between ')' and `{`.
    ]]></customMessage>
  </check>

  <check customId="NoScalaDoc" level="error" class="org.scalastyle.file.RegexChecker" enabled="true">
    <parameters><parameter name="regex">(?m)^(\s*)/[*][*].*$(\r|)\n^\1  [*]</parameter></parameters>
    <customMessage>Use Javadoc style indentation for multiline comments</customMessage>
  </check>

  <check customId="OmitBracesInCase" level="error" class="org.scalastyle.file.RegexChecker" enabled="true">
    <parameters><parameter name="regex">case[^\n>]*=>\s*\{</parameter></parameters>
    <customMessage>Omit braces in case clauses.</customMessage>
  </check>

  <!-- Rules below are not enforced by default. -->

  <check level="error" class="org.scalastyle.scalariform.MethodNamesChecker" enabled="false">
    <parameters><parameter name="regex"><![CDATA[^[a-z][A-Za-z0-9]*$]]></parameter></parameters>
  </check>

  <check level="error" class="org.scalastyle.scalariform.IllegalImportsChecker" enabled="false">
    <parameters><parameter name="illegalImports"><![CDATA[sun._,java.awt._]]></parameter></parameters>
  </check>

  <check level="error" class="org.scalastyle.file.FileLengthChecker" enabled="false">
    <parameters><parameter name="maxFileLength">800</parameter></parameters>
  </check>

  <check level="error" class="org.scalastyle.scalariform.NumberOfTypesChecker" enabled="false">
    <parameters><parameter name="maxTypes">30</parameter></parameters>
  </check>

  <check level="error" class="org.scalastyle.scalariform.CyclomaticComplexityChecker" enabled="false">
    <parameters><parameter name="maximum">10</parameter></parameters>
  </check>

  <check level="error" class="org.scalastyle.scalariform.MethodLengthChecker" enabled="false">
    <parameters><parameter name="maxLength">50</parameter></parameters>
  </check>

  <check level="error" class="org.scalastyle.scalariform.NumberOfMethodsInTypeChecker" enabled="false">
    <parameters><parameter name="maxMethods"><![CDATA[30]]></parameter></parameters>
  </check>

  <check level="error" class="org.scalastyle.scalariform.MagicNumberChecker" enabled="false">
    <parameters><parameter name="ignore">-1,0,1,2,3</parameter></parameters>
  </check>

</scalastyle>