

Google’s new technology helps create powerful ranking algorithms

Google has announced the release of enhanced technology that makes it easier and faster to research and develop new algorithms that can be implemented quickly.

This allows Google to quickly create new anti-spam algorithms, improve natural language processing and ranking-related algorithms, and get them into production faster than ever.

Improved TF-Ranking coincides with the latest Google updates

This is of interest because Google has rolled out several spam control algorithms and two core algorithm updates in June and July 2021. These developments followed directly after the release of this new technology in May 2021.

The timing may be random, but given all that the new version of Keras-based TF-Ranking does, it may be important to familiarize yourself with it to understand why Google has increased the pace of releasing new ranking-related algorithm updates.

New version of Keras-based TF-Ranking

Google announced a new version of TF-Ranking that can be used to improve neural learning to rank algorithms as well as natural language processing algorithms like BERT.


It is, so to speak, a powerful way to create new algorithms and amplify existing ones, and to do it remarkably fast.

TensorFlow Ranking

According to Google, TensorFlow is a platform for machine learning.

In a YouTube video from 2019, the first version of TensorFlow Ranking was described as:

“The first open source deep learning library for learning to rank (LTR) at scale.”

The innovation in the original TF-Ranking platform was that it changed how relevant documents were ranked.

Previously, relevant documents were compared with each other in what is called pairwise ranking: the probability of one document being relevant to a query was compared against the probability of another document.

This was a comparison between pairs of documents and not a comparison of the entire list.

The innovation in TF-Ranking is that it enabled comparison of the entire list of documents at a time, which is called multi-item scoring. This approach allows for better ranking decisions.
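To make the distinction concrete, here is a minimal, self-contained Python sketch (not Google’s actual implementation; the scores and relevance labels are invented) contrasting a pairwise logistic loss, which compares documents two at a time, with a listwise softmax loss that scores the entire candidate list at once:

```python
import math

def pairwise_logistic_loss(scores, labels):
    """Pairwise ranking loss: look at every pair of documents where one
    is more relevant than the other, penalizing inverted score orderings."""
    loss, pairs = 0.0, 0
    for i in range(len(scores)):
        for j in range(len(scores)):
            if labels[i] > labels[j]:  # document i should outrank document j
                loss += math.log(1.0 + math.exp(scores[j] - scores[i]))
                pairs += 1
    return loss / max(pairs, 1)

def listwise_softmax_loss(scores, labels):
    """Listwise (multi-item) loss: softmax over the whole list of scores,
    cross-entropy against the normalized relevance labels."""
    denom = sum(math.exp(s) for s in scores)
    total = sum(labels)
    return -sum(
        (y / total) * math.log(math.exp(s) / denom)
        for s, y in zip(scores, labels)
        if y > 0
    )

scores = [2.0, 0.5, 1.0]   # model scores for three candidate documents
labels = [1.0, 0.0, 1.0]   # relevance judgments for the same documents
print(round(pairwise_logistic_loss(scores, labels), 4))
print(round(listwise_softmax_loss(scores, labels), 4))
```

The pairwise loss only ever sees local orderings between two documents; the listwise loss exposes the score distribution over the whole candidate list, which is the property TF-Ranking’s multi-item scoring exploits.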


Improved TF-Ranking allows rapid development of powerful new algorithms

Google’s article published on their AI blog says that the new TF-Ranking is a major release that makes it easier than ever to set up learning to rank (LTR) models and get them into live production faster.

This means Google can create new algorithms and add them to search faster than ever.

The article says:

“Our native Keras ranking model has a brand-new workflow design, including a flexible ModelBuilder, a DatasetBuilder to set up training data, and a Pipeline to train the model with the provided dataset.

These components make building a customized LTR model easier than ever and facilitate rapid exploration of new model structures for production and research.”

TF-Ranking BERT

When an article or research paper reports only marginally better results, hedges them, and says more research is needed, it is a hint that the algorithm under discussion may not be in use, either because the results are inconclusive or because the approach is a dead end.

This is not the case with TFR-BERT, a combination of TF-Ranking and BERT.

BERT is a machine learning approach to natural language processing. It’s a way to understand search queries and web page content.

BERT is one of the most important updates to Google and Bing in the last few years.

The article says that combining TF-Ranking with BERT to optimize the ordering of list inputs generated “significant improvements.”

This statement that the results were significant is important because it increases the likelihood that such a thing is currently in use.

The implication is that Keras-based TF ranking made BERT more powerful.

According to Google:

“Our experience shows that this TFR-BERT architecture delivers significant improvements in pre-trained language model performance, leading to state-of-the-art performance for several popular ranking tasks …”

TF Ranking and GAMs

There is another kind of algorithm, called Generalized Additive Models (GAMs), which TF-Ranking also improves, producing a version even more powerful than the original.

One of the things that makes this algorithm important is that it is transparent because everything that goes into generating the ranking can be seen and understood.


Google explained the importance of transparency as follows:

“Transparency and interpretability are important factors in implementing LTR models in ranking systems that may be involved in determining the results of processes such as loan eligibility assessment, ad targeting or guidance in medical treatment decisions.

In such cases, the contribution of each characteristic to the final ranking should be examined and understood in order to ensure transparency, accountability and fairness in the results.”

The problem with GAMs was that it was not known how to apply this technology to ranking-type problems.

To solve this problem and make GAMs usable for ranking, TF-Ranking was used to create neural ranking Generalized Additive Models (GAMs), which are more transparent about how web pages are ranked.

Google calls this Interpretable Learning-to-Rank.

Here’s what the Google AI article says:

“For this purpose, we have developed a neural ranking GAM, an extension of generalized additive models for ranking problems.

Unlike standard GAMs, a neural ranking GAM can take into account both the features of the ranked items and the context features (e.g., query or user profile) to derive an interpretable, compact model.

For example, in the figure below, using a neural ranking GAM makes visible how distance, price and relevance, in the context of a given user device, contribute to the final ranking of the hotel.

Neural Ranking GAMs are now available as part of TF-Ranking …”
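The additive structure is what makes such a model interpretable: the final score is simply a sum of independent per-feature sub-scores, so each feature’s contribution can be read off directly. Here is a toy Python sketch of that idea, with hand-written sub-scoring functions standing in for the small sub-networks a neural ranking GAM would learn (the hotels, prices and scoring curves are invented for illustration):

```python
# Toy generalized additive ranking model: the final score is a sum of
# independent per-feature sub-scores, so each feature's contribution to
# the ranking is directly inspectable.

def distance_score(km):        # closer hotels score higher
    return 1.0 / (1.0 + km)

def price_score(dollars):      # cheaper hotels score higher
    return 100.0 / (100.0 + dollars)

def relevance_score(match):    # query/description match in [0, 1]
    return match

def gam_score(hotel):
    """Return the total score and the per-feature contributions."""
    contributions = {
        "distance": distance_score(hotel["km"]),
        "price": price_score(hotel["price"]),
        "relevance": relevance_score(hotel["match"]),
    }
    return sum(contributions.values()), contributions

hotels = [
    {"name": "Hotel A", "km": 0.5, "price": 220, "match": 0.9},
    {"name": "Hotel B", "km": 4.0, "price": 90, "match": 0.7},
]
for hotel in sorted(hotels, key=lambda h: gam_score(h)[0], reverse=True):
    total, parts = gam_score(hotel)
    print(hotel["name"], round(total, 3),
          {k: round(v, 3) for k, v in parts.items()})
```

Because the score decomposes additively, anyone auditing the ranking can see exactly how much distance, price and relevance each contributed to a hotel’s position, which is the transparency property Google highlights.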

I asked Jeff Coyle, co-founder of the AI content optimization technology MarketMuse (@MarketMuseCo), about TF-Ranking and GAMs.


Jeffrey, who has a computer science background as well as decades of experience in search marketing, noted that GAMs are an important technology and that improving them was an important event.

Coyle shared:

“I have spent considerable time researching the neural ranking GAM innovation and its possible impact on contextual analysis (for queries), which has been a long-term goal for Google’s scoring team.

Neural RankGAM and related technologies are deadly weapons for personalization (especially user data and contextual information, such as location) and for intentional analysis.

With keras_dnn_tfrecord.py available as a public example, we get a glimpse of innovation at a basic level.

I recommend that everyone check out that code.”

Outperforming Gradient Boosted Decision Trees (GBDT)

Beating the standard baseline matters because it means the new approach delivers performance that improves the quality of search results.

In this case, the standard is gradient boosted decision trees (GBDTs), a machine learning technique that has several advantages.


But Google also explains that GBDTs have drawbacks:

“GBDTs cannot be applied directly to large discrete feature spaces, such as raw document text. They are also generally less scalable than neural ranking models.”

In a research paper titled Are Neural Rankers Still Outperformed by Gradient Boosted Decision Trees?, the researchers state that neural learning-to-rank models are “by a large margin inferior” to … “tree-based implementations.”

Google researchers used the new Keras-based TF-Ranking to produce what they called the Data Augmented Self-Attentive Latent Cross (DASALC) model.

DASALC is important because it is able to match or exceed the current state-of-the-art baseline:

“Our models are able to perform comparably with the strong tree-based baseline, while surpassing recently published neural learning to rank methods by a large margin. Our results also serve as a benchmark for neural learning to rank models.”

Keras-based TF-Ranking speeds up development of ranking algorithms

The important takeaway is that this new system speeds up research and development of new ranking systems, including algorithms that identify spam and remove it from the search results.


The article concludes:

“All in all, we believe the new Keras-based TF-Ranking version will make it easier to conduct neural LTR research and to deploy production ranking systems.”

Google has been innovating at an ever faster pace over the last few months, with several spam algorithm updates and two core algorithm updates over the course of two months.

These new technologies may be the reason why Google has rolled out so many new algorithms to improve spam control and rank sites in general.

Citations

Google AI Blog Article
Advances in TF-Ranking

Google’s new DASALC algorithm
Are Neural Rankers Still Outperformed by Gradient Boosted Decision Trees?

Official TensorFlow website

TensorFlow Ranking v0.4.0 GitHub Page
https://github.com/tensorflow/ranking/releases/tag/v0.4.0

Keras Example keras_dnn_tfrecord.py
