A Detailed Guide to Traversing the Document Tree and Manipulating Tags with the Python Scraping Library BeautifulSoup

This article walks through the methods and properties the Python scraping library BeautifulSoup provides for traversing the document tree and manipulating tags.

The examples below use BeautifulSoup to traverse a document tree and operate on its tags; they cover the most basic material.

 html_doc = """ The Dormouse's story

The Dormouse's story

Once upon a time there were three little sisters; and their names were Elsie, Lacie and Tillie; and they lived at the bottom of a well.

...

""" from bs4 import BeautifulSoup soup = BeautifulSoup(html_doc,'lxml')

I. Child nodes

A Tag may contain multiple strings or other Tags; all of these are children of that Tag. BeautifulSoup provides many properties for traversing and operating on a tag's children.

1. Getting a Tag by its name

 print(soup.head)
 print(soup.title)
 <head><title>The Dormouse's story</title></head>
 <title>The Dormouse's story</title>

Accessing a tag through its name only returns the first tag with that name. To get every tag of a given kind, use the find_all method:

 soup.find_all('a')
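For reference, here is a minimal sketch of what this call returns when run against the html_doc defined above (the `links` variable is illustrative, and the printed attribute order reflects BeautifulSoup's own rendering):

 links = soup.find_all('a')
 # Expected result: a list of the three <a> tags from html_doc
 # [<a class="sister" href="http://example.com/elsie" id="link1">Elsie</a>,
 #  <a class="sister" href="http://example.com/lacie" id="link2">Lacie</a>,
 #  <a class="sister" href="http://example.com/tillie" id="link3">Tillie</a>]
 print(len(links))  # 3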

2. contents: returns a Tag's children as a list

 head_tag = soup.head
 head_tag.contents
 [<title>The Dormouse's story</title>]
 title_tag = head_tag.contents[0]
 title_tag
 <title>The Dormouse's story</title>
 title_tag.contents
 ["The Dormouse's story"]

3. children: iterate over a tag's child nodes with this attribute

 for child in title_tag.children:
     print(child)
 The Dormouse's story

4. descendants: contents and children both return only direct children, while descendants recursively iterates over all of a tag's descendants

 for child in head_tag.children:
     print(child)
 <title>The Dormouse's story</title>
 for child in head_tag.descendants:
     print(child)
 <title>The Dormouse's story</title>
 The Dormouse's story
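A quick way to see the difference between the two: children yields only direct children, while descendants keeps recursing. A minimal sketch using the head_tag from above:

 # head_tag has exactly one direct child: the <title> tag ...
 print(len(list(head_tag.children)))     # 1
 # ... but two descendants: the <title> tag plus the string inside it
 print(len(list(head_tag.descendants)))  # 2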

5. string: if a tag has exactly one child of type NavigableString, the tag can use .string to get that child

 title_tag.string
 "The Dormouse's story" 

If a tag has only one child node, .string returns that single child's NavigableString.

 head_tag.string
 "The Dormouse's story" 

If a tag has more than one child, it cannot tell which child .string should refer to, so .string returns None.

 print(soup.html.string)
 None
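Note that .string does look through a single wrapping tag: if a tag's only child is another tag, .string is forwarded to that child. A minimal sketch against the html_doc above:

 # <p class="title"> has the single child <b>, so .string is forwarded to it
 print(soup.p.string)     # The Dormouse's story
 # <body> has several children, so it cannot pick one
 print(soup.body.string)  # None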

6. strings and stripped_strings

If a tag contains more than one string, you can iterate over them with .strings:

 for string in soup.strings:
     print(string)
 The Dormouse's story

 The Dormouse's story

 Once upon a time there were three little sisters; and their names were
 Elsie
 ,
 Lacie
  and
 Tillie
 ;
 and they lived at the bottom of a well.

 ...

The output of .strings contains a lot of extra whitespace and blank lines; use .stripped_strings to remove this whitespace:

 for string in soup.stripped_strings:
     print(string)
 The Dormouse's story
 The Dormouse's story
 Once upon a time there were three little sisters; and their names were
 Elsie
 ,
 Lacie
 and
 Tillie
 ;
 and they lived at the bottom of a well.
 ...
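As a small follow-up sketch, the stripped_strings generator can be collected into a list or joined, which is a common way to pull all of the visible text out of a page (the variable names here are illustrative):

 # Materialize the generator, then join the pieces into one blob of text
 texts = list(soup.stripped_strings)
 print(texts[0])               # The Dormouse's story
 print(' '.join(texts)[:60])   # first 60 characters of the joined text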

II. Parent nodes

1. parent: get an element's parent node

 title_tag = soup.title
 title_tag.parent
 <head><title>The Dormouse's story</title></head>

Strings also have a parent:

 title_tag.string.parent
 <title>The Dormouse's story</title>

2. parents: recursively get all of an element's ancestors

 link = soup.a
 for parent in link.parents:
     if parent is None:
         print(parent)
     else:
         print(parent.name)
 p
 body
 html
 [document]
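One practical use of .parents is walking upward until an ancestor with a particular name is found. A minimal sketch (the first_ancestor helper is illustrative, not part of the article's code):

 # Walk up from the first <a> tag until we reach the enclosing <p>
 def first_ancestor(tag, name):
     for parent in tag.parents:
         if parent.name == name:
             return parent
     return None

 story_p = first_ancestor(soup.a, 'p')
 print(story_p['class'])  # ['story']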

III. Sibling nodes

 sibling_soup = BeautifulSoup("<a><b>text1</b><c>text2</c></a>", 'lxml')
 print(sibling_soup.prettify())
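For reference, prettify() prints roughly the following; the exact wrapping depends on the parser (lxml adds the <html> and <body> tags around the fragment):

 <html>
  <body>
   <a>
    <b>
     text1
    </b>
    <c>
     text2
    </c>
   </a>
  </body>
 </html>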

1. next_sibling and previous_sibling

 sibling_soup.b.next_sibling
 <c>text2</c>
 sibling_soup.c.previous_sibling
 <b>text1</b>

In a real document, a tag's .next_sibling or .previous_sibling is usually a string or whitespace rather than another tag:

 soup.find_all('a')
 soup.a.next_sibling  # the first <a> tag's next_sibling is ',\n'
 ',\n'
 soup.a.next_sibling.next_sibling  # the sibling after that is the second <a> tag (Lacie)

2. next_siblings and previous_siblings

 for sibling in soup.a.next_siblings:
     print(repr(sibling))
 ',\n'
 <a class="sister" href="http://example.com/lacie" id="link2">Lacie</a>
 ' and\n'
 <a class="sister" href="http://example.com/tillie" id="link3">Tillie</a>
 ';\nand they lived at the bottom of a well.'
 for sibling in soup.find(id="link3").previous_siblings:
     print(repr(sibling))
 ' and\n'
 <a class="sister" href="http://example.com/lacie" id="link2">Lacie</a>
 ',\n'
 <a class="sister" href="http://example.com/elsie" id="link1">Elsie</a>
 'Once upon a time there were three little sisters; and their names were\n'
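Because the sibling iterators also yield the whitespace strings between tags, it is common to filter them down to actual Tag objects with an isinstance check. A minimal sketch (the variable names are illustrative):

 from bs4.element import Tag

 # Keep only the tag siblings that follow the first <a>, skipping the text nodes
 tag_siblings = [s for s in soup.a.next_siblings if isinstance(s, Tag)]
 print([t.get('id') for t in tag_siblings])  # ['link2', 'link3']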

IV. Going back and forth

1. next_element and previous_element

These point to the next or previous object that was parsed (a string or a tag), i.e., the successor or predecessor in a depth-first traversal of the document.

 last_a_tag = soup.find("a", id="link3")
 print(last_a_tag.next_sibling)
 print(last_a_tag.next_element)
 ;
 and they lived at the bottom of a well.
 Tillie
 last_a_tag.previous_element
 ' and\n'

2. next_elements and previous_elements

With .next_elements and .previous_elements you can move forward or backward through the document's content in parse order, as if the document were being parsed again.

 for element in last_a_tag.next_elements:
     print(repr(element))
 'Tillie'
 ';\nand they lived at the bottom of a well.'
 '\n'
 <p class="story">...</p>
 '...'
 '\n'
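As a final sketch, .next_elements combines naturally with an isinstance check to collect just the strings that are parsed after a given tag (the trailing_text variable is illustrative):

 from bs4.element import NavigableString

 # Gather every piece of text parsed after the link3 tag, whitespace included
 trailing_text = [el for el in last_a_tag.next_elements
                  if isinstance(el, NavigableString)]
 print(trailing_text[0])  # Tillie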


That concludes this detailed look at traversing the document tree and manipulating tags with the Python scraping library BeautifulSoup. For more, please see the other related articles on 0133技术站!
