Information Gathering & WebSite ReConnaissance.
Updated Feb 8, 2018 - Shell
Collects URLs for a specified product (URLs related to sales) based on a number of product features and categories.
A repository that contains OOP concepts in Python.
A small tool for extracting all URLs from a blob of binary data (e.g. PDFs).
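The core of such a tool can be sketched with a regular expression applied directly to bytes. This is a minimal illustration of the general technique, not the tool's actual code; the pattern below only covers http(s) URLs, while a real extractor would also handle schemes like ftp:// and percent-encoded forms.

```python
import re

# Matches http(s) URLs embedded in arbitrary bytes; a simplification --
# real extractors cover more schemes and more permissive path characters.
URL_PATTERN = re.compile(rb"https?://[\w\-.]+(?:/[\w\-./?%&=+#~]*)?")

def extract_urls(blob: bytes) -> list[str]:
    """Return decoded, de-duplicated URLs found in a blob of binary data."""
    found = []
    for match in URL_PATTERN.finditer(blob):
        url = match.group().decode("ascii", errors="ignore")
        if url not in found:
            found.append(url)
    return found
```

Because the pattern is a bytes regex, it can be run over raw file contents (such as a PDF read in binary mode) without decoding the whole file first.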
usable-url is a minuscule suite that resolves potentially unusable URLs.
A simple logistic regression model built to classify spam emails. The model uses counts of the words present in each email to classify it as spam or ham.
gmailSaver can download emails from a Gmail account using the Gmail API and extract all URL links from them.
A library for extracting links from any kind of document.
URL Extractor is a simple Python script that extracts the domain name from each URL in a list stored in a text file, providing a convenient way to process URLs in bulk.
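Domain extraction of this kind is typically a thin wrapper around the standard library's URL parser. The sketch below assumes the input is one URL per line and uses `urllib.parse.urlparse` to pull out the network location; it is an illustration of the idea, not the repository's own code.

```python
from urllib.parse import urlparse

def extract_domains(lines: list[str]) -> list[str]:
    """Extract the domain (network location) from each URL, skipping blank lines."""
    domains = []
    for line in lines:
        url = line.strip()
        if not url:
            continue
        domains.append(urlparse(url).netloc)
    return domains
```

To process a text file, pass it in line by line, e.g. `extract_domains(open("urls.txt").readlines())`.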
An easy way to extract 1fichier URLs from MultiUp URLs.
LinkLifter is a Python script that searches for URLs in a given text file or recursively in a directory and its subdirectories. The found URLs, along with the file they are located in, are saved to a CSV file.
Extract URLs from a file or web address.
This Python script extracts all unique URLs from a given webpage and outputs them to the console or to a file. It uses the requests module to send a GET request to the specified URL and the BeautifulSoup module to parse the HTML content of the response.
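The parse step of that requests-plus-BeautifulSoup approach can be shown dependency-free with the standard library's `html.parser`; this sketch collects unique absolute URLs from `<a href>` attributes and is an illustration of the technique, not the script's own code.

```python
from html.parser import HTMLParser
from urllib.parse import urljoin

class LinkCollector(HTMLParser):
    """Collects unique absolute URLs from <a href> attributes."""

    def __init__(self, base_url: str):
        super().__init__()
        self.base_url = base_url
        self.urls: list[str] = []

    def handle_starttag(self, tag, attrs):
        if tag != "a":
            return
        for name, value in attrs:
            if name == "href" and value:
                # Resolve relative links against the page's base URL.
                url = urljoin(self.base_url, value)
                if url not in self.urls:
                    self.urls.append(url)

def unique_links(html: str, base_url: str) -> list[str]:
    parser = LinkCollector(base_url)
    parser.feed(html)
    return parser.urls
```

In practice the HTML would come from an HTTP GET (via `requests.get` or `urllib.request.urlopen`) before being fed to the parser.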