HTML Tags Working
<!DOCTYPE html>: Defines the document type and version of HTML (e.g., HTML5).
<html>: Root element of an HTML document.
<head>: Contains meta-information about the document (e.g., title, character set).
<title>: Sets the title of the document that appears in the browser tab.
<body>: Contains the visible content of the page.
<h1> to <h6>: Heading tags, where <h1> is the largest and <h6> the smallest.
<p>: Paragraph tag for text blocks.
<br>: Line break (no closing tag needed).
<strong>: Marks text as important; browsers render it in bold.
<em>: Marks emphasized text; browsers render it in italics.
<span>: Inline container for text styling.
4. List Tags: <ul> (unordered list), <ol> (ordered list), <li> (list item).
5. Table Tags: <table>, <tr> (row), <th> (header cell), <td> (data cell).
6. Form Tags: <form>, <input>, <label>, <button> for collecting user input.
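As a minimal sketch of the form tags (the action URL and field names here are placeholders, not part of any real site):

```html
<form action="/submit" method="post">
  <label for="name">Name:</label>
  <input type="text" id="name" name="name">
  <button type="submit">Send</button>
</form>
```

The label's for attribute matches the input's id, which links the two for accessibility.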
Example Structure:
<!DOCTYPE html>
<html>
<head>
  <title>My Web Page</title>
</head>
<body>
  <h1>Welcome to My Web Page</h1>
  <p>This is a paragraph with <strong>bold</strong> and <em>italic</em> text.</p>
  <ul>
    <li>Item 1</li>
    <li>Item 2</li>
  </ul>
  <table>
    <tr>
      <th>Header 1</th>
      <th>Header 2</th>
    </tr>
    <tr>
      <td>Row 1 Data 1</td>
      <td>Row 1 Data 2</td>
    </tr>
  </table>
</body>
</html>

SEMANTIC WEB ARCHITECTURE
The Semantic Web is a layered architecture designed to make web content more accessible
and interpretable by machines, enabling better interaction between humans and computers.
Here’s an overview of its architecture, which is commonly represented in a stack:
XML: A standard format for structuring data. It provides syntax but doesn't assign
meaning to data. XML ensures that data can be transported and stored in a machine-
readable format.
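As a sketch, XML structures data without assigning it meaning; the element names below are illustrative, not a standard vocabulary:

```xml
<person>
  <name>Alice</name>
  <email>alice@example.com</email>
</person>
```

A machine can parse this reliably, but nothing in XML itself says what a "person" or "name" means; that is what the layers above provide.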
RDFS: Provides a basic vocabulary for describing properties and classes of RDF
resources. It helps define the structure of RDF data by describing relationships
between different concepts (like defining a "Person" class and its properties).
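The "Person" class example might be written in Turtle syntax like this (the ex: namespace and property names are illustrative):

```turtle
@prefix rdf:  <http://www.w3.org/1999/02/22-rdf-syntax-ns#> .
@prefix rdfs: <http://www.w3.org/2000/01/rdf-schema#> .
@prefix ex:   <http://example.org/> .

ex:Person a rdfs:Class .
ex:name a rdf:Property ;
    rdfs:domain ex:Person ;
    rdfs:range rdfs:Literal .
```

Here rdfs:domain and rdfs:range constrain the name property to apply to Person resources and take literal values.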
SPARQL: A query language designed to retrieve and manipulate data stored in RDF
format. SPARQL enables complex queries across multiple datasets, akin to SQL for
relational databases.
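A SPARQL query over data like the sketch above might look like this (the prefix and property names are illustrative):

```sparql
PREFIX ex: <http://example.org/>

SELECT ?name
WHERE {
  ?person a ex:Person ;
          ex:name ?name .
}
```

The WHERE clause matches graph patterns rather than table rows, which is the main difference from SQL.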
Proof: Describes how systems can prove the validity of information and inferences.
Trust: Establishes the reliability of information, incorporating mechanisms like
digital signatures to ensure data integrity and trustworthiness.
User interface and applications: this topmost layer involves human interaction with the
Semantic Web. Applications built on top of the lower layers let users query and interact
with the web more effectively, benefiting from machine-readable data.
Semantic Web's Goal
The goal of the Semantic Web is to make the web more intelligent, allowing machines to
interpret, understand, and respond to human requests based on the meaning (semantics) of the
data, rather than just syntactical matching.
This architecture fosters interoperability, better data sharing, and automation, supporting
future web advancements like AI, data mining, and more sophisticated search engines.
SEARCH ENGINES
Search engines are tools that help users find information on the internet by indexing and
retrieving relevant web pages based on the user's query. They use sophisticated algorithms to
rank the results based on relevance, content quality, and other factors. The most popular
search engines include:
1. Google
Dominant player in the market with the largest share of users worldwide.
Key features: Personalized search, Knowledge Graph, Google Maps integration, and
sophisticated ranking algorithms.
Specialized services: Image search, video search (through YouTube), news search,
and more.
2. Bing
Microsoft's search engine and the second most widely used worldwide.
Key features: Integration with Microsoft products such as Windows and Edge, plus
image, video, and maps search.
3. Yahoo
Historically a major search engine, though its search results are powered by Bing
today.
Key features: Email, news, and media content alongside search capabilities.
4. DuckDuckGo
Privacy-focused search engine that does not track user activity or store personal data.
Key features: Clean, simple interface and focus on privacy.
Specialized services: Instant answers, encrypted search, and !bang shortcuts for
specific sites.
5. Baidu
China's dominant search engine, optimized for Chinese-language content and services.
6. Yandex
Russia’s most popular search engine with a significant presence in Eastern Europe.
Key features: Localization for Russian and neighboring regions, deep integration
with Russian services.
7. Ecosia
Search engine that uses its advertising revenue to fund tree-planting projects.
8. Startpage
Privacy-focused engine that serves Google results without tracking or profiling users.
How Search Engines Work
1. Crawling: Search engines send out bots (also called spiders) to scan the web and
discover new or updated content.
2. Indexing: After discovering content, they store it in an index, which is a vast database
of web pages.
3. Ranking: When a user submits a query, the search engine retrieves relevant pages and
ranks them based on factors like keywords, page quality, user engagement, and more.
4. Serving results: The results are displayed in an ordered list, often with snippets of
content to help users decide which link to click.
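The four steps above can be sketched as a toy in-memory search engine (the pages and the word-overlap scoring are illustrative; real engines crawl live sites and use far more ranking signals):

```python
# Step 1 (crawling, simulated): pretend these pages were fetched by a crawler.
pages = {
    "a.html": "semantic web data for machines",
    "b.html": "search engines index web pages",
    "c.html": "web pages ranked by relevance",
}

# Step 2 (indexing): build an inverted index mapping each word to the pages containing it.
index = {}
for url, text in pages.items():
    for word in text.split():
        index.setdefault(word, set()).add(url)

def search(query):
    # Steps 3-4 (ranking and serving): score pages by how many query words
    # they contain, then return URLs ordered by score (ties broken by name).
    scores = {}
    for word in query.split():
        for url in index.get(word, ()):
            scores[url] = scores.get(url, 0) + 1
    return sorted(scores, key=lambda u: (-scores[u], u))

print(search("web pages"))  # b.html and c.html match both words, a.html only one
```

Real ranking algorithms also weigh link structure (e.g. PageRank), freshness, and user engagement, but the crawl-index-rank-serve pipeline is the same shape.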
To improve the visibility of websites in search results, webmasters use search engine
optimization (SEO) techniques, which focus on keywords, quality content, backlinks, and
other factors that search engines prioritize when ranking pages.
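On-page SEO often starts with descriptive metadata in the page's head; a minimal sketch (the shop name and wording are made up for illustration):

```html
<head>
  <title>Handmade Leather Wallets | Example Shop</title>
  <meta name="description" content="Browse handmade leather wallets crafted from full-grain leather, with free shipping on all orders.">
</head>
```

Search engines commonly display the title and description in result snippets, so clear, keyword-relevant wording here directly affects click-through.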
The article "The Semantic Web" from Scientific American (May 2001), authored by Tim
Berners-Lee, James Hendler, and Ora Lassila, discusses a revolutionary concept for the future
of the internet. The Semantic Web envisions a system where data on the web is structured in
a way that is both understandable and usable by machines. This would enable computers to
process and manipulate web content more intelligently and automatically, leading to a new
era of connectivity and efficiency online.
The Semantic Web relies on technologies like Resource Description Framework (RDF) and
ontologies to encode data with precise meaning, enabling better automation, integration, and
interaction across various platforms. The article emphasizes that this new web would open
the door to significant advancements in fields like artificial intelligence and knowledge
management.