Wikipedia:Articles for deletion/Co-training
- The following discussion is an archived debate of the proposed deletion of the article below. Please do not modify it. Subsequent comments should be made on the appropriate discussion page (such as the article's talk page or in a deletion review). No further edits should be made to this page.
The result was keep. –Juliancolton | Talk 00:32, 9 May 2009 (UTC)
- Co-training (edit | talk | history | protect | delete | links | watch | logs | views) (delete) – (View log)
No notability established, no independent sources, looks like just promotion for some term some people talked about at a conference. Certainly there would be other uses of the term that did not refer to these two people's theory. Just a junk article. DreamGuy (talk) 22:31, 1 May 2009 (UTC)
- Strong Delete - seriously fails WP:N and WP:RS. Sjones23 (talk - contributions) 23:06, 1 May 2009 (UTC)
- Delete Does not meet inclusion guidelines. No independent coverage in reliable sources. ChildofMidnight (talk) 04:43, 2 May 2009 (UTC)
- Speedy Keep Just a junk nomination which fails WP:BEFORE. It only takes a few seconds to find more sources for this, e.g. Co-training in Data Mining. Colonel Warden (talk) 10:11, 2 May 2009 (UTC)
- You continue to have a really bizarre idea of what meets Wikipedia's rules on notability. The existence of some random refs to people using the term does not demonstrate that it's notable enough for a Wikipedia article. It's just a neologism without widespread usage anywhere. We aren't academicjargonopedia here. DreamGuy (talk) 14:50, 2 May 2009 (UTC)
- Keep. Neologism hardly applies to a scientific term that is 11 years old. Cited by 1312 in Google Scholar - that seems pretty widespread. It is discussed in tens of books,[1] e.g. "A paradigm, termed co-training, for learning with labeled and unlabeled data was proposed in Blum and Mitchell (1998)."[2] It seems to be a seminal work in the data mining of web pages, so it is certainly notable. There are reliable sources outside the mainstream press. Fences and windows (talk) 00:43, 3 May 2009 (UTC)
- Non-jargon source. Search Engines Ready to Learn: "New "co-training" models that tap additional sources of information about a Web page are the third technique - "the thing I'm most excited about," says Mitchell. These proprietary algorithms can capture "unlabeled" data sets with very limited training by programmers. They do this by analyzing the hyperlinks that refer to a Web page and correlating the information contained in those links to the text on the page. "The features that describe a page are the words on the page and the links that point to that page," says Mitchell. "The co-training models utilize both classifiers to determine the likelihood that a page will contain data relevant to the search criteria." In a progressive search, the algorithm remembers link information as an implied correlation, and as the possible link correlations grow, they can help to confirm a hit or a miss. The search works both ways so that text information on the page can help determine the relevance of link classifiers - hence the term "co-training". Co-training models reduce hit error percentages by more than half, Mitchell claims. Other algorithms have a hit-accuracy of 86 percent, while co-trained models attain 96 percent accuracy, he says. WhizBang's online job site, FlipDog.com, launched last year as a demonstration of its data-mining technology. Since then it has signed up clients such as Dun & Bradstreet and the U.S. Department of Labor, for which it is compiling a directory of continuing and distance education opportunities." Fences and windows (talk) 00:49, 3 May 2009 (UTC)
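For readers who want the mechanics behind the description quoted above: the idea is the co-training scheme of Blum and Mitchell (1998), in which two classifiers are trained on two different "views" of each example (here, the words on a page and the text of the links pointing at it), and each classifier's most confident predictions on unlabeled pages are fed back into the shared training pool. What follows is only a simplified sketch, not the implementation described in the sources cited in this discussion: the GaussianNB classifiers, the feature matrices X_text and X_links, and the fixed number of confident examples added per round are illustrative assumptions.

```python
# Minimal co-training sketch (after Blum & Mitchell, 1998).
# Assumptions: X_text and X_links are NumPy feature matrices for the two views,
# y holds labels for the labeled examples (entries for unlabeled rows are
# overwritten as pseudo-labels are assigned), and the initial labeled set
# already contains every class.
import numpy as np
from sklearn.naive_bayes import GaussianNB

def co_train(X_text, X_links, y, labeled_idx, unlabeled_idx,
             rounds=10, per_round=2):
    """Grow a labeled pool by letting two view-specific classifiers
    pseudo-label the unlabeled examples they are most confident about."""
    labeled = list(labeled_idx)
    unlabeled = list(unlabeled_idx)
    y = np.array(y)
    clf_text, clf_links = GaussianNB(), GaussianNB()
    for _ in range(rounds):
        if not unlabeled:
            break
        # Train one classifier per view (page text vs. inbound-link text)
        # on the examples labeled so far.
        clf_text.fit(X_text[labeled], y[labeled])
        clf_links.fit(X_links[labeled], y[labeled])
        newly_labeled = set()
        for clf, X in ((clf_text, X_text), (clf_links, X_links)):
            # Pick the unlabeled examples this view is most confident about.
            proba = clf.predict_proba(X[unlabeled])
            top = np.argsort(proba.max(axis=1))[-per_round:]
            for i in top:
                idx = unlabeled[i]
                # The confident view assigns the pseudo-label; both views
                # train on it in the next round.
                y[idx] = clf.predict(X[idx:idx + 1])[0]
                newly_labeled.add(idx)
        labeled.extend(newly_labeled)
        unlabeled = [i for i in unlabeled if i not in newly_labeled]
    return clf_text, clf_links
```

In the original formulation each classifier adds a fixed number of positive and negative examples drawn from a small, periodically replenished pool of unlabeled data; the sketch collapses that into a single confidence-based pick per view.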
- Rescue! I've made an attempt at rescuing the article. I found some sources and tried to edit it to make it more accessible and obviously notable. I hope that the page is now in a state where it is clearer what the use of co-training is. It is not just an obscure bit of academia, but rather a technique that is influential and practically applicable. The original author of the page was obviously a computer scientist and didn't really give any "in" for the non-specialist (though I saw a blog post referring to this page saying it was a good definition).[3] I am worried that other pages like this are getting deleted instead of being rescued. Fences and windows (talk) 03:02, 3 May 2009 (UTC)
- Keep. I work in machine learning, and this is a very noteworthy topic in semi-supervised learning. The article's been cleaned up. And anyway, the label of academic jargon is best determined by knowledgeable academics. This knowledgeable academic says co-training is not mere jargon. Peter huggins (talk) 22:25, 4 May 2009 (UTC)
- Keep on the basis of all the newfound references and the improvement work by Fences and windows. Holkingers (talk) 17:13, 6 May 2009 (UTC)
- Keep - Good work by Fences and windows. -- King of ♥ ♦ ♣ ♠ 00:09, 9 May 2009 (UTC)[reply]
- The above discussion is preserved as an archive of the debate. Please do not modify it. Subsequent comments should be made on the appropriate discussion page (such as the article's talk page or in a deletion review). No further edits should be made to this page.