
Java Web Crawler Code: Writing a Web Crawler

How do I write a web crawler in Java that crawls online music resources and then displays the results on a Java page?

The source code is below; hope it helps.


package com.ly.mainprocess;

import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.util.ArrayList;
import java.util.List;

import org.apache.http.Consts;
import org.apache.http.Header;
import org.apache.http.HttpEntity;
import org.apache.http.HttpResponse;
import org.apache.http.NameValuePair;
import org.apache.http.client.entity.UrlEncodedFormEntity;
import org.apache.http.client.methods.HttpGet;
import org.apache.http.client.methods.HttpPost;
import org.apache.http.cookie.Cookie;
import org.apache.http.impl.client.DefaultHttpClient;
import org.apache.http.message.BasicNameValuePair;
import org.apache.http.util.EntityUtils;

public class Test1 {

    public static void main(String[] args) {
        Test1 test1 = new Test1();
        System.out.println(test1.process("******", "******"));
    }

    @SuppressWarnings("deprecation")
    public boolean process(String username, String password) {
        boolean ret = false;
        DefaultHttpClient httpclient = new DefaultHttpClient();
        try {
            HttpResponse response;
            HttpEntity entity;
            List<Cookie> cookies;

            // Build the login POST request
            HttpPost httppost = new HttpPost(""); // user login URL
            List<NameValuePair> nvps = new ArrayList<NameValuePair>();
            nvps.add(new BasicNameValuePair("nickname", username));
            nvps.add(new BasicNameValuePair("password", password));
            nvps.add(new BasicNameValuePair("origURL", ""));
            nvps.add(new BasicNameValuePair("loginregFrom", "index"));
            nvps.add(new BasicNameValuePair("ss", "10101"));
            httppost.setEntity(new UrlEncodedFormEntity(nvps, Consts.UTF_8));
            httppost.addHeader("Referer", "");
            httppost.addHeader("Connection", "keep-alive");
            httppost.addHeader("Content-Type", "application/x-www-form-urlencoded");
            httppost.addHeader("Accept-Language", "zh-CN,zh;q=0.8");
            httppost.addHeader("Origin", "");
            httppost.addHeader("User-Agent", "Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/30.0.1599.101 Safari/537.36");

            response = httpclient.execute(httppost);
            entity = response.getEntity();
            // System.out.println("Login form get: " + response.getStatusLine());
            EntityUtils.consume(entity);

            // Cookies received after logging in
            cookies = httpclient.getCookieStore().getCookies();
            if (cookies.isEmpty()) {
                // System.out.println("None");
            } else {
                for (int i = 0; i < cookies.size(); i++) {
                    // System.out.println("- " + cookies.get(i).toString());
                }
            }

            // Follow the redirect after login
            String url = ""; // redirect target
            Header locationHeader = response.getFirstHeader("Location");
            if (locationHeader != null) {
                url = locationHeader.getValue(); // the redirect href
                HttpGet httpget1 = new HttpGet(url);
                response = httpclient.execute(httpget1);
                // login succeeded
            }
            entity = response.getEntity();
            // System.out.println(response.getStatusLine());
            if (entity != null) {
                // System.out.println("Response content length: " + entity.getContentLength());
            }

            // Read the response body
            BufferedReader reader = new BufferedReader(new InputStreamReader(entity.getContent(), "UTF-8"));
            String line = null;
            while ((line = reader.readLine()) != null) {
                // System.out.println(line);
            }

            // Automatic check-in: visit a sub-page of the site
            HttpPost httppost1 = new HttpPost(""); // personal-info page URL
            httppost1.addHeader("Content-Type", "text/plain;charset=UTF-8");
            httppost1.addHeader("Accept", "text/plain, */*");
            httppost1.addHeader("X-Requested-With", "XMLHttpRequest");
            httppost1.addHeader("Referer", "");
            response = httpclient.execute(httppost1);
            entity = response.getEntity();
            if (response.getStatusLine().toString().indexOf("HTTP/1.1 200 OK") >= 0) {
                ret = true;
            }
            if (entity != null) {
                // System.out.println("Response content length: " + entity.getContentLength());
            }

            // Print the result
            reader = new BufferedReader(new InputStreamReader(entity.getContent(), "UTF-8"));
            line = null;
            while ((line = reader.readLine()) != null) {
                System.out.println(line);
            }
        } catch (Exception e) {
            e.printStackTrace();
        } finally {
            httpclient.getConnectionManager().shutdown();
        }
        return ret;
    }
}
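Note that DefaultHttpClient has been deprecated since HttpClient 4.3. As a rough, non-authoritative sketch (not part of the original answer), the same login POST could be written against the newer builder-style API; the blank URL and the form field values are placeholders carried over from the code above:

import java.nio.charset.StandardCharsets;
import java.util.ArrayList;
import java.util.List;

import org.apache.http.NameValuePair;
import org.apache.http.client.entity.UrlEncodedFormEntity;
import org.apache.http.client.methods.CloseableHttpResponse;
import org.apache.http.client.methods.HttpPost;
import org.apache.http.impl.client.BasicCookieStore;
import org.apache.http.impl.client.CloseableHttpClient;
import org.apache.http.impl.client.HttpClients;
import org.apache.http.message.BasicNameValuePair;
import org.apache.http.util.EntityUtils;

public class LoginSketch {
    public static void main(String[] args) throws Exception {
        BasicCookieStore cookieStore = new BasicCookieStore();   // session cookies survive across requests
        try (CloseableHttpClient client = HttpClients.custom()
                .setDefaultCookieStore(cookieStore)
                .build()) {
            HttpPost login = new HttpPost("");                    // login URL left blank, as in the original
            List<NameValuePair> form = new ArrayList<NameValuePair>();
            form.add(new BasicNameValuePair("nickname", "******"));
            form.add(new BasicNameValuePair("password", "******"));
            login.setEntity(new UrlEncodedFormEntity(form, StandardCharsets.UTF_8));
            try (CloseableHttpResponse response = client.execute(login)) {
                System.out.println(response.getStatusLine());
                EntityUtils.consume(response.getEntity());        // release the connection
            }
            System.out.println("Cookies after login: " + cookieStore.getCookies());
        }
    }
}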

Looking for a web crawler program written in Java

Start from a seed link and fetch that page; parse it, pick out the useful links it contains, and keep fetching those links in a loop. That is the whole principle of a simple crawler (see the sketch below). After fetching the pages you also need a reasonably capable document parser to extract their content, and then you process the pages by keyword with a tokenizer to build your own search engine; the tokenizer is the hard part. I am working on the same thing myself.
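A minimal sketch of that fetch, extract-links, enqueue loop, using only the JDK and a crude href regular expression. The seed URL, page limit, and regex are placeholders introduced for illustration, not part of the original answer:

import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.net.URL;
import java.nio.charset.StandardCharsets;
import java.util.ArrayDeque;
import java.util.HashSet;
import java.util.Queue;
import java.util.Set;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class SimpleCrawler {
    // Crude absolute-link pattern; a real crawler would use an HTML parser instead.
    private static final Pattern LINK = Pattern.compile("href=\"(https?://[^\"]+)\"");

    public static void main(String[] args) throws Exception {
        Queue<String> queue = new ArrayDeque<String>();
        Set<String> visited = new HashSet<String>();
        queue.add("https://example.com/");          // placeholder seed URL
        int limit = 20;                             // placeholder page limit

        while (!queue.isEmpty() && visited.size() < limit) {
            String url = queue.poll();
            if (!visited.add(url)) {
                continue;                           // already fetched
            }
            String html;
            try {
                html = fetch(url);                  // 1. fetch the page
            } catch (Exception e) {
                System.out.println("Skipping " + url + ": " + e.getMessage());
                continue;
            }
            System.out.println("Fetched " + url + " (" + html.length() + " chars)");
            Matcher m = LINK.matcher(html);         // 2. extract the links it contains
            while (m.find()) {
                queue.add(m.group(1));              // 3. enqueue them and loop
            }
        }
    }

    private static String fetch(String url) throws Exception {
        StringBuilder sb = new StringBuilder();
        try (BufferedReader in = new BufferedReader(
                new InputStreamReader(new URL(url).openStream(), StandardCharsets.UTF_8))) {
            String line;
            while ((line = in.readLine()) != null) {
                sb.append(line).append('\n');
            }
        }
        return sb.toString();
    }
}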

How do you implement a web crawler in Java?

A web crawler is a program that automatically fetches web pages; it downloads pages from the Web for a search engine and is one of its core components. A traditional crawler starts from the URLs of one or more seed pages, obtains the URLs on those pages, and while crawling keeps extracting new URLs from the current page and adding them to the queue until some stop condition is met. For vertical search, a focused crawler, one that only crawls pages relevant to a specific topic, is a better fit.

Below is the core code of a simple crawler implemented in Java:

public void crawl() throws Throwable {
    while (continueCrawling()) {
        CrawlerUrl url = getNextUrl();                  // take the next URL from the crawl queue
        if (url != null) {
            printCrawlInfo();
            String content = getContent(url);           // fetch the text of the page

            // A focused crawler only keeps pages related to its topic;
            // here a simple regular-expression match decides relevance.
            if (isContentRelevant(content, this.regexpSearchPattern)) {
                saveContent(url, content);              // save the page locally

                // Extract the links in the page and add them to the crawl queue
                Collection<String> urlStrings = extractUrls(content, url);
                addUrlsToUrlQueue(url, urlStrings);
            } else {
                System.out.println(url + " is not relevant ignoring ...");
            }

            // Pause between requests so the target site does not block us
            Thread.sleep(this.delayBetweenUrls);
        }
    }
    closeOutputStream();
}

private CrawlerUrl getNextUrl() throws Throwable {
    CrawlerUrl nextUrl = null;
    while ((nextUrl == null) && (!urlQueue.isEmpty())) {
        CrawlerUrl crawlerUrl = this.urlQueue.remove();
        // doWeHavePermissionToVisit: may we visit this URL? A polite crawler follows
        //   the rules the site publishes in its robots.txt.
        // isUrlAlreadyVisited: has the URL been seen before? Large search engines often
        //   use a BloomFilter for de-duplication; a HashMap is enough here.
        // isDepthAcceptable: has the depth limit been reached? Crawlers usually work
        //   breadth-first; some sites build crawler traps (auto-generated links that lead
        //   to infinite loops), and a depth limit avoids them.
        if (doWeHavePermissionToVisit(crawlerUrl)
                && (!isUrlAlreadyVisited(crawlerUrl))
                && isDepthAcceptable(crawlerUrl)) {
            nextUrl = crawlerUrl;
            // System.out.println("Next url to be visited is " + nextUrl);
        }
    }
    return nextUrl;
}

private String getContent(CrawlerUrl url) throws Throwable {
    // HttpClient 4.1 is called differently from earlier versions
    HttpClient client = new DefaultHttpClient();
    HttpGet httpGet = new HttpGet(url.getUrlString());
    StringBuffer strBuf = new StringBuffer();
    HttpResponse response = client.execute(httpGet);
    if (HttpStatus.SC_OK == response.getStatusLine().getStatusCode()) {
        HttpEntity entity = response.getEntity();
        if (entity != null) {
            BufferedReader reader = new BufferedReader(
                    new InputStreamReader(entity.getContent(), "UTF-8"));
            String line = null;
            if (entity.getContentLength() > 0) {
                strBuf = new StringBuffer((int) entity.getContentLength());
                while ((line = reader.readLine()) != null) {
                    strBuf.append(line);
                }
            }
        }
        if (entity != null) {
            entity.consumeContent();
        }
    }
    // Mark the URL as visited
    markUrlAsVisited(url);
    return strBuf.toString();
}

public static boolean isContentRelevant(String content,
        Pattern regexpPattern) {
    boolean retValue = false;
    if (content != null) {
        // Does the page match the topic regular expression?
        Matcher m = regexpPattern.matcher(content.toLowerCase());
        retValue = m.find();
    }
    return retValue;
}

public List<String> extractUrls(String text, CrawlerUrl crawlerUrl) {
    Map<String, String> urlMap = new HashMap<String, String>();
    extractHttpUrls(urlMap, text);
    extractRelativeUrls(urlMap, text, crawlerUrl);
    return new ArrayList<String>(urlMap.keySet());
}

private void extractHttpUrls(Map<String, String> urlMap, String text) {
    // httpRegexp: Pattern field for absolute links (the declaration is missing from the
    // original snippet, so the name here is assumed)
    Matcher m = httpRegexp.matcher(text);
    while (m.find()) {
        String url = m.group();
        String[] terms = url.split("a href=\"");
        for (String term : terms) {
            // System.out.println("Term = " + term);
            if (term.startsWith("http")) {
                int index = term.indexOf("\"");
                if (index > 0) {
                    term = term.substring(0, index);
                }
                urlMap.put(term, term);
                System.out.println("Hyperlink: " + term);
            }
        }
    }
}

private void extractRelativeUrls(Map<String, String> urlMap, String text,
        CrawlerUrl crawlerUrl) {
    Matcher m = relativeRegexp.matcher(text);
    URL textURL = crawlerUrl.getURL();
    String host = textURL.getHost();
    while (m.find()) {
        String url = m.group();
        String[] terms = url.split("a href=\"");
        for (String term : terms) {
            if (term.startsWith("/")) {
                int index = term.indexOf("\"");
                if (index > 0) {
                    term = term.substring(0, index);
                }
                String s = "http://" + host + term;
                urlMap.put(s, s);
                System.out.println("Relative url: " + s);
            }
        }
    }
}

public static void main(String[] args) {
    try {
        String url = "";
        Queue<CrawlerUrl> urlQueue = new LinkedList<CrawlerUrl>();
        String regexp = "java";
        urlQueue.add(new CrawlerUrl(url, 0));
        NaiveCrawler crawler = new NaiveCrawler(urlQueue, 100, 5, 1000L,
                regexp);
        // boolean allowCrawl = crawler.areWeAllowedToVisit(url);
        // System.out.println("Allowed to crawl: " + url + " " + allowCrawl);
        crawler.crawl();
    } catch (Throwable t) {
        System.out.println(t.toString());
        t.printStackTrace();
    }
}

Fetching specific data with a Java crawler

Using the networking classes the JDK provides, you can retrieve the HTML source of the page behind a URL.

With that HTML in hand, regular expressions are enough to pull out the content you are interested in.

For example, if you want every piece of text on a page that contains the keyword "java", you can match the page source line by line with a regular expression, strip the HTML tags and unrelated content, and keep only the text that includes the keyword "java", as sketched below.
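A minimal sketch of that idea: fetch the page over a URLConnection, strip tags from each line, and keep only the lines that mention the keyword. The URL, User-Agent value, and tag-stripping regex are placeholder assumptions, not taken from the original text:

import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.net.URL;
import java.net.URLConnection;
import java.nio.charset.StandardCharsets;

public class KeywordGrep {
    public static void main(String[] args) throws Exception {
        URLConnection conn = new URL("https://example.com/").openConnection(); // placeholder URL
        conn.setRequestProperty("User-Agent", "Mozilla/5.0");                  // many sites reject the default agent
        try (BufferedReader in = new BufferedReader(
                new InputStreamReader(conn.getInputStream(), StandardCharsets.UTF_8))) {
            String line;
            while ((line = in.readLine()) != null) {
                String text = line.replaceAll("<[^>]*>", "").trim();           // strip HTML tags
                if (text.toLowerCase().contains("java")) {                     // keep lines with the keyword
                    System.out.println(text);
                }
            }
        }
    }
}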

Crawling images from a page follows essentially the same flow as crawling text, with one extra step.

First match the img tags with a regular expression, then use another regular expression on each tag to extract the image URL from its src attribute; finally read that image URL through a buffered input stream and write the bytes to a local file with an output stream. A sketch of these steps follows.
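A rough sketch of those extra steps, matching img tags first and then pulling the src attribute out of each before copying the bytes to disk. The page URL, output folder, and both regexes are placeholders chosen for illustration, and only absolute image URLs are handled:

import java.io.InputStream;
import java.net.URL;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.util.Scanner;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class ImageGrab {
    public static void main(String[] args) throws Exception {
        String html;
        try (Scanner sc = new Scanner(new URL("https://example.com/").openStream(), "UTF-8")) { // placeholder URL
            html = sc.useDelimiter("\\A").next();                    // read the whole page
        }
        Pattern imgTag = Pattern.compile("<img[^>]*>");              // step 1: find <img> tags
        Pattern srcAttr = Pattern.compile("src=\"([^\"]+)\"");       // step 2: pull out the src attribute
        Path dir = Files.createDirectories(Paths.get("pics"));       // placeholder output folder
        int n = 0;
        Matcher tag = imgTag.matcher(html);
        while (tag.find()) {
            Matcher src = srcAttr.matcher(tag.group());
            if (src.find()) {
                // step 3: stream the image bytes to a local file (absolute URLs only)
                try (InputStream in = new URL(src.group(1)).openStream()) {
                    Files.copy(in, dir.resolve("img" + (n++) + ".jpg"));
                }
            }
        }
        System.out.println("Downloaded " + n + " images");
    }
}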

Java source code for a web crawler?

// Java crawler demo
import java.io.File;
import java.net.URL;
import java.net.URLConnection;
import java.nio.file.Files;
import java.nio.file.Paths;
import java.util.Scanner;
import java.util.UUID;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class DownMM {
    public static void main(String[] args) throws Exception {
        // out is the output directory; note that it must end with \\
        String out = "D:\\JSP\\pic\\java\\";
        try {
            File f = new File(out);
            if (!f.exists()) {
                f.mkdirs();
            }
        } catch (Exception e) {
            System.out.println("no");
        }
        String url = "-";
        Pattern reg = Pattern.compile("<img src=\"(.*?)\"");
        for (int j = 0, i = 1; i <= 10; i++) {
            URL uu = new URL(url + i);
            URLConnection conn = uu.openConnection();
            conn.setRequestProperty("User-Agent", "Mozilla/5.0 (Windows NT 6.3; WOW64; Trident/7.0; rv:11.0) like Gecko");
            Scanner sc = new Scanner(conn.getInputStream());
            Matcher m = reg.matcher(sc.useDelimiter("\\A").next());
            while (m.find()) {
                Files.copy(new URL(m.group(1)).openStream(), Paths.get(out + UUID.randomUUID() + ".jpg"));
                System.out.println("Downloaded: " + j++);
            }
        }
    }
}
