cwiki.apache.org
This section provides several recommendations on how to make your web application, and Apache Tomcat as a whole, start up faster. General: Before we continue to specific tips and tricks, the general advice is that if Tomcat hangs or is not responsive, you have to perform diagnostics: take several thread dumps to see what Tomcat is really doing. See the Troubleshooting and Diagnostics page f
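A JVM thread dump for a hanging Tomcat is usually taken with `jstack <pid>` or `kill -3 <pid>`. As a language-neutral sketch of what a thread dump captures (illustrative only, not Tomcat-specific), this snippet snapshots the stack of every live thread in the current process:

```python
# Illustrative only: a JVM thread dump is taken with jstack/kill -3, but the
# idea is the same -- capture the current stack of every live thread.
import sys
import traceback

def thread_dump():
    lines = []
    for thread_id, frame in sys._current_frames().items():
        lines.append(f"--- thread {thread_id} ---")
        lines.extend(l.rstrip() for l in traceback.format_stack(frame))
    return "\n".join(lines)

dump = thread_dump()
print(dump)  # the dumping thread's own stack is always included
```

Taking several such dumps a few seconds apart, as the page advises, lets you see which stacks stay stuck across snapshots.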
KIP-500: Replace ZooKeeper with a Self-Managed Metadata Quorum. Status: Current state: Accepted. Discussion thread: here. JIRA: KAFKA-9119. Please keep the discussion on the mailing list rather than commenting on the wiki (wiki discussions get unwieldy fast). Motivation: Currently, Kafka uses ZooKeeper to store its metadata about partitions and brokers, and to elect a bro
Summary: Possible Remote Code Execution when alwaysSelectFullNamespace is true (set either by the user or by a plugin like the Convention Plugin) and then: results are used with no namespace and, at the same time, their upper package has no namespace or a wildcard namespace; similarly, the same possibility exists when using the url tag which doesn't have value and action set and, at the same time, its upper package has no or wildcard name
Problem: It is possible to perform an RCE attack with a malicious Content-Type value. If the Content-Type value isn't valid, an exception is thrown which is then used to display an error message to a user. Solution: If you are using the Jakarta-based file upload Multipart parser, upgrade to Apache Struts version 2.3.32 or 2.5.10.1. You can also switch to a different implementation of the Multipart parser. B
FINAL: This proposal is now complete and has been submitted for a VOTE. MXNet: Apache Incubator Proposal. Abstract: MXNet is a flexible and efficient library for deep learning. Proposal: MXNet is an open-source deep learning framework that allows you to define, train, and deploy deep neural networks on a wide array of devices, from cloud infrastructure to mobile devices. It is highly scalable, allowing
NetBeans Proposal. Status: This proposal has been discussed at http://s.apache.org/netbeans_proposal and NetBeans has been accepted for incubation at Apache; the vote result is at https://s.apache.org/netbeans_vote. The next steps are being discussed at https://lists.apache.org/[email protected] - see http://s.apache.org/netbeans_please_join for how to subscribe. Abstract: NetBeans is an open
Please help us keep this FAQ up-to-date. If there is an answer that you think can be improved, please help improve it. If you look for an answer that isn't here, and later figure it out, please add it. You don't need permission; it's a wiki. Exactly-Once Processing: What is the difference between an "idempotent producer" and a "transactional producer"? An idempotent producer guarantees that single me
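The idempotence half of that distinction can be sketched with a toy model (not the real Kafka implementation; the class and names here are made up): the guarantee comes from the broker deduplicating retried sends by producer id and sequence number.

```python
# Toy sketch of broker-side deduplication, the core idea behind an
# idempotent producer. Not Kafka's actual code or API.

class Broker:
    def __init__(self):
        self.log = []
        self.last_seq = {}  # producer_id -> highest sequence appended

    def append(self, producer_id, seq, msg):
        # A retry re-sends the same (producer_id, seq); drop duplicates.
        if self.last_seq.get(producer_id, -1) >= seq:
            return False  # duplicate, already in the log
        self.log.append(msg)
        self.last_seq[producer_id] = seq
        return True

broker = Broker()
broker.append("p1", 0, "a")
broker.append("p1", 1, "b")
broker.append("p1", 1, "b")  # retried send is deduplicated
assert broker.log == ["a", "b"]
```

A transactional producer layers atomic multi-partition writes on top of this per-partition guarantee.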
Summary: Remote Code Execution can be performed via the method: prefix when Dynamic Method Invocation is enabled.
Apache Beam Abstract Apache Beam is an open source, unified model and set of language-specific SDKs for defining and executing data processing workflows, and also data ingestion and integration flows, supporting Enterprise Integration Patterns (EIPs) and Domain Specific Languages (DSLs). Dataflow pipelines simplify the mechanics of large-scale batch and streaming data processing and can run on a n
Apache Beam Apache Dataflow proposal has been renamed to Apache Beam (combination of Batch and strEAM). The proposal page has moved to BeamProposal.
Kafka's mirroring feature makes it possible to maintain a replica of an existing Kafka cluster. The following diagram shows how to use the MirrorMaker tool to mirror a source Kafka cluster into a target (mirror) Kafka cluster. The tool uses a Kafka consumer to consume messages from the source cluster, and re-publishes those messages to the local (target) cluster using an embedded Kafka producer. T
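That consume-and-re-publish loop can be roughed out as follows (plain Python lists stand in for the two clusters; nothing here is MirrorMaker's actual API):

```python
# Toy sketch of the MirrorMaker idea: a consumer reads from the source
# cluster's log and an embedded producer re-publishes into the target
# (mirror) cluster. Lists stand in for real Kafka topics.

source_cluster = ["m1", "m2", "m3"]   # messages in the source topic
target_cluster = []                   # the mirror

def mirror(source, target, offset=0):
    """Consume from `source` starting at `offset`, re-publish to `target`."""
    for msg in source[offset:]:
        target.append(msg)            # embedded producer publishes locally
    return len(source)                # next offset to resume from

next_offset = mirror(source_cluster, target_cluster)
assert target_cluster == source_cluster
```

Tracking the resume offset is what lets the real tool mirror continuously rather than re-copying the whole topic.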
NOTICE: An initial version of this page has been added to the official documentation here. It still needs a lot of work. However, because it is generated from the code, it may be more up to date in some cases. Contributions to improve this documentation are welcome and encouraged. Some tasks are tracked here: KAFKA-3360. Introduction: This document covers the protocol impl
Overview: This lists all supported data types in Hive. See Type System in the Tutorial for additional information. For data types supported by HCatalog, see: HCatLoader Data Types, HCatStorer Data Types, HCatRecord Data Types. Numeric Types: TINYINT (1-byte signed integer, from -128 to 127), SMALLINT (2-byte signed integer, from -32,768 to 32,767), INT/INTEGER (4-byte signed integer, from -2,147,483,648 to 2,14
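The ranges above follow directly from two's-complement storage width, which a couple of lines of Python can confirm:

```python
# An n-byte signed integer spans -(2**(8n-1)) .. 2**(8n-1) - 1, which is
# exactly where Hive's TINYINT/SMALLINT/INT bounds come from.

def signed_range(n_bytes):
    bits = 8 * n_bytes
    return -(2 ** (bits - 1)), 2 ** (bits - 1) - 1

assert signed_range(1) == (-128, 127)                      # TINYINT
assert signed_range(2) == (-32_768, 32_767)                # SMALLINT
assert signed_range(4) == (-2_147_483_648, 2_147_483_647)  # INT/INTEGER
```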
Debugging EhCache's SizeOfEngine. Enable Tomcat logging in Eclipse: http://wiki.eclipse.org/WTP_Tomcat_FAQ#How_do_I_enable_the_JULI_logging_in_a_Tomcat_5.5_Server_instance.3F Enable SizeOf-specific logging (from http://ehcache.org/documentation/configuration/cache-size#sizing-of-cached-entries): set the net.sf.ehcache.sizeof.verboseDebugLogging system property to true. Enable debug logs on net.sf.ehca
Want to appear on this page? Send a quick description of your organization and usage to the mailing list or to @apachekafka or @jaykreps on Twitter and we'll add you. Companies: LinkedIn - Apache Kafka is used at LinkedIn for activity stream data and operational metrics. This powers various products like LinkedIn Newsfeed and LinkedIn Today, in addition to our offline analytics systems like Hadoop. Yahoo
The backend Perl code must conform to the following style guidelines. If you find any code which doesn't conform, please fix it. These requirements are intended to maintain consistent, organized, professional code. Indentation: Proper indentation is very important. Just because the code lines up properly in your editor of choice does not mean it will line up properly for someone else working on t
Writing GenericUDAFs: A Tutorial User-Defined Aggregation Functions (UDAFs) are an excellent way to integrate advanced data-processing into Hive. Hive allows two varieties of UDAFs: simple and generic. Simple UDAFs, as the name implies, are rather simple to write, but incur performance penalties because of the use of Java Reflection, and do not allow features such as variable-length argument lists
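Whichever variety you write, the evaluator follows the same iterate/merge/terminate lifecycle. A toy mean aggregator sketches that flow (the method names mirror the lifecycle, not an actual Hive API):

```python
# Sketch of the UDAF evaluation lifecycle shared by simple and generic
# Hive UDAFs, using a mean aggregator. Illustrative names only.

class MeanUDAF:
    def __init__(self):
        self.count, self.total = 0, 0.0

    def iterate(self, value):           # consume one input row
        self.count += 1
        self.total += value

    def merge(self, partial):           # combine a partial aggregation
        self.count += partial.count
        self.total += partial.total

    def terminate(self):                # produce the final result
        return self.total / self.count if self.count else None

# Two "mappers" aggregate separately, then a "reducer" merges the partials.
a, b = MeanUDAF(), MeanUDAF()
for v in (1, 2, 3):
    a.iterate(v)
for v in (4, 5):
    b.iterate(v)
a.merge(b)
assert a.terminate() == 3.0
```

Generic UDAFs avoid the reflection cost by resolving types once up front rather than per row, but the evaluation phases are the same.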
Spring Boot. Available as of Camel 2.15. The Spring Boot component provides auto-configuration for Apache Camel. Our opinionated auto-configuration of the Camel context auto-detects Camel routes available in the Spring context and registers the key Camel utilities (like the producer template, consumer template, and the type converter) as beans. Maven users will need to add the following dependency to their po
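A typical Maven dependency declaration for the Camel Spring Boot component looks like the following; the coordinates are illustrative and should be verified against the Camel release you use:

```xml
<!-- Illustrative coordinates; check the docs for your Camel version -->
<dependency>
  <groupId>org.apache.camel</groupId>
  <artifactId>camel-spring-boot</artifactId>
  <version><!-- your Camel version --></version>
</dependency>
```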
This page describes a proposed Kafka Improvement Proposal (KIP) process for proposing a major change to Kafka. Subpages: Kafka Streams sub page (lists ongoing/incomplete KIPs), Kafka Streams KIP Overview (lists released KIPs), Dormant/inactive KIPs, Discarded KIPs, KIP discussion recordings. Getting Started: If this is your first time contributing: sign up for the Developer mailing list [email protected]. The inst
Overview: We are proposing an enhanced hash join algorithm called “hybrid hybrid grace hash join”. We can benefit from this feature as illustrated below: the query will not fail even if the estimated memory requirement is slightly wrong; expensive garbage collection overhead can be avoided when the hash table grows; join execution can use a Map join operator even though the small table doesn't fit in memo
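The grace-hash-join idea the proposal builds on can be sketched in a few lines (a simplified illustration, not Hive's implementation): partition both inputs by a hash of the join key, so each build-side partition can be joined with a hash table small enough for memory.

```python
# Simplified grace hash join: partition both sides by hash of the join key,
# then build and probe an in-memory hash table per partition. The "hybrid"
# variants keep some partitions in memory instead of spilling them all.

def partition(rows, key, n):
    parts = [[] for _ in range(n)]
    for row in rows:
        parts[hash(row[key]) % n].append(row)
    return parts

def grace_hash_join(build, probe, key, n_partitions=4):
    out = []
    bparts = partition(build, key, n_partitions)
    pparts = partition(probe, key, n_partitions)
    for bp, pp in zip(bparts, pparts):
        table = {}
        for row in bp:                      # build a per-partition hash table
            table.setdefault(row[key], []).append(row)
        for row in pp:                      # probe it partition by partition
            for match in table.get(row[key], []):
                out.append({**match, **row})
    return out

small = [{"id": 1, "name": "a"}, {"id": 2, "name": "b"}]
big = [{"id": 1, "x": 10}, {"id": 2, "x": 20}, {"id": 3, "x": 30}]
joined = grace_hash_join(small, big, "id")
assert sorted(r["name"] for r in joined) == ["a", "b"]
```

Because matching keys always hash to the same partition on both sides, a mis-estimated build size only forces more partitions to spill, rather than failing the query.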
Java recommendation for Solr. For versions from 6.0.0 through 9.0.0, I would recommend Java 11. For 9.1.0 or later, I would recommend Java 17. I am not sure I can recommend running any Solr version below 6.0.0. Java 17 is noticeably faster than Java 11 in my small-scale experiments. In the past I would have strongly recommended never using an IBM Java. I don't know if that is still a good idea or not.
Abstract: Apache Hadoop is a framework for the distributed processing of large data sets using clusters of computers typically composed of commodity hardware. Over the last few years Apache Hadoop has become the de facto platform for distributed data processing using commodity hardware. Apache Hive is a popular SQL interface for data processing using Apache Hadoop. A user-submitted SQL query is converted
FlexJS™ is the name for a next-generation Flex SDK that has the goal of allowing applications developed in MXML and ActionScript to not only run in the Flash/AIR runtimes, but also to run natively in the browser without Flash, on mobile devices as a PhoneGap/Cordova application, and in embedded JS environments such as the Chromium Embedded Framework used in the Adobe Common Extensibility Platform. Fl
This document describes the support of statistics for Hive tables (see HIVE-33). Motivation: Statistics such as the number of rows of a table or partition and the histograms of a particular interesting column are important in many ways. One of the key use cases of statistics is query optimization. Statistics serve as the input to the cost functions of the optimizer so that it can compare different p
Spark Installation. Follow the instructions to install Spark: YARN Mode: http://spark.apache.org/docs/latest/running-on-yarn.html Standalone Mode: https://spark.apache.org/docs/latest/spark-standalone.html Hive on Spark supports Spark on YARN mode by default. For the installation, perform the following tasks: Install Spark (either download pre-built Spark, or build the assembly from source). Install/build a
1. Introduction: We propose modifying Hive to add Spark as a third execution backend (HIVE-7292), parallel to MapReduce and Tez. Spark is an open-source data analytics cluster computing framework that’s built outside of Hadoop's two-stage MapReduce paradigm but on top of HDFS. Spark’s primary abstraction is a distributed collection of items called a Resilient Distributed Dataset (RDD). RDDs can be cr
This document describes the Hive user configuration properties (sometimes called parameters, variables, or options), and notes which releases introduced new properties. The canonical list of configuration properties is managed in the HiveConf Java class, so refer to the HiveConf.java file for a complete list of configuration properties available in your Hive release. For information about how to u
This page describes the different clients supported by HiveServer2. Other documentation for HiveServer2 includes: HiveServer2 Overview, Setting Up HiveServer2, Hive Configuration Properties: HiveServer2. Beeline – Command Line Shell: HiveServer2 supports a command shell, Beeline, that works with HiveServer2. It's a JDBC client that is based on the SQLLine CLI (http://sqlline.sourceforge.net/). There’s de
Source: new entries from 'Dashboard - Apache Software Foundation'.