Python Natural Language Processing Cookbook: Over 60 recipes for building powerful NLP solutions using Python and LLM libraries, Second Edition

Zhenya Antić, Saurabh Chakravarty

Playing with Grammar

Grammar is one of the main building blocks of language. Each human language, and each programming language for that matter, has a set of rules that every person speaking it must follow; otherwise, they risk not being understood. These grammatical rules can be uncovered using NLP and are useful for extracting data from sentences. For example, using information about the grammatical structure of text, we can parse out subjects, objects, and relations between different entities.

In this chapter, you will learn how to use different packages to reveal the grammatical structure of words and sentences, as well as extract certain parts of sentences. These are the topics covered in this chapter:

  • Counting nouns – plural and singular nouns
  • Getting the dependency parse
  • Extracting noun chunks
  • Extracting the subjects and objects of the sentence
  • Finding patterns in text using grammatical information

Technical requirements

Please follow the installation requirements given in Chapter 1 to run the notebooks in this chapter.

Counting nouns – plural and singular nouns

In this recipe, we will do two things: determine whether a noun is plural or singular, and turn plural nouns into singular and vice versa.

You might need these two things for a variety of tasks. For example, you might want to compute word statistics, and for that, you most likely need to count the singular and plural forms of a noun together. In order to count plural nouns together with singular ones, you need a way to recognize whether a word is plural or singular.
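As a quick illustration of that use case, here is a small sketch (not part of the recipe; the model name and example sentence are only assumptions for this illustration) that counts singular and plural forms together by keying the counts on each noun’s lemma:

    from collections import Counter
    import spacy

    # Hypothetical example: merge singular and plural counts by using the lemma as the key.
    nlp = spacy.load("en_core_web_sm")
    doc = nlp("The bird saw three other birds.")
    noun_counts = Counter(token.lemma_ for token in doc if token.pos_ == "NOUN")
    print(noun_counts)  # expected: Counter({'bird': 2})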

Getting ready

To determine whether a noun is singular or plural, we will use spaCy via two different methods: by looking at the difference between the lemma and the actual word, and by looking at the morph attribute. To inflect these nouns, that is, turn singular nouns into plural or vice versa, we will use the textblob package. We will also see how to determine the noun’s number using GPT-3.5 through the OpenAI API. The code for this section is located at https://github.com/PacktPublishing/Python-Natural-Language-Processing-Cookbook-Second-Edition/tree/main/Chapter02.

How to do it…

We will first use spaCy's lemma information to infer whether a noun is singular or plural. Then, we will use the morph attribute of Token objects. We will then create a function that uses one of those methods. Finally, we will use GPT-3.5 to find out the number of nouns:

  1. Run the code in the file and language utility notebooks. If you run into an error saying that the small or large models do not exist, you need to open the lang_utils.ipynb file, uncomment, and run the statement that downloads the model:
    %run -i "../util/file_utils.ipynb"
    %run -i "../util/lang_utils.ipynb"
  2. Initialize the text variable and process it using the spaCy small model to get the resulting Doc object:
    text = "I have five birds"
    doc = small_model(text)
  3. In this step, we loop through the Doc object. For each token in the object, we check whether it’s a noun and whether the lemma is the same as the word itself. Since the lemma is the basic form of the word, if the lemma is different from the word, that token is plural:
    for token in doc:
        if (token.pos_ == "NOUN" and token.lemma_ != token.text):
            print(token.text, "plural")

    The result should be as follows:

    birds plural
  4. Now, we will check the number of a noun using a different method: the morph features of a Token object. The morph features are the morphological features of a word, such as number, case, and so on. Since we know that token 3 is a noun, we directly access the morph features and get the Number to get the same result as previously:
    doc = small_model("I have five birds.")
    print(doc[3].morph.get("Number"))

    Here is the result:

    ['Plur']
  5. In this step, we prepare to define a function that returns a tuple, (noun, number). In order to better encode the noun number, we use an Enum class that assigns numbers to different values. We assign 1 to singular and 2 to plural. Once we create the class, we can directly refer to the noun number variables as Noun_number.SINGULAR and Noun_number.PLURAL:
    from enum import Enum  # needed for the Enum base class

    class Noun_number(Enum):
        SINGULAR = 1
        PLURAL = 2
  6. In this step, we define the function. It takes as input the text, the spaCy model, and the method of determining the noun number. The two methods are lemma and morph, the same two methods we used in steps 3 and 4, respectively. The function outputs a list of tuples, each of the format (<noun text>, <noun number>), where the noun number is expressed using the Noun_number class defined in step 5:
    def get_nouns_number(text, model, method="lemma"):
        nouns = []
        doc = model(text)
        for token in doc:
            if (token.pos_ == "NOUN"):
                if method == "lemma":
                    if token.lemma_ != token.text:
                        nouns.append((token.text,
                            Noun_number.PLURAL))
                    else:
                        nouns.append((token.text,
                            Noun_number.SINGULAR))
                elif method == "morph":
                    if token.morph.get("Number") == "Sing":
                        nouns.append((token.text,
                            Noun_number.PLURAL))
                    else:
                        nouns.append((token.text,
                            Noun_number.SINGULAR))
        return nouns
  7. We can use the preceding function and see its performance with different spaCy models. In this step, we use the small spaCy model with the function we just defined. Using both methods, we see that the spaCy model gets the number of the irregular noun geese incorrectly:
    text = "Three geese crossed the road"
    nouns = get_nouns_number(text, small_model, "morph")
    print(nouns)
    nouns = get_nouns_number(text, small_model)
    print(nouns)

    The result should be as follows:

    [('geese', <Noun_number.SINGULAR: 1>), ('road', <Noun_number.SINGULAR: 1>)]
    [('geese', <Noun_number.SINGULAR: 1>), ('road', <Noun_number.SINGULAR: 1>)]
  8. Now, let’s do the same using the large model. If you have not yet downloaded the large model, do so by running the first line. Otherwise, you can comment it out. Here, we see that although the morph method still incorrectly assigns singular to geese, the lemma method provides the correct answer:
    !python -m spacy download en_core_web_lg
    large_model = spacy.load("en_core_web_lg")
    nouns = get_nouns_number(text, large_model, "morph")
    print(nouns)
    nouns = get_nouns_number(text, large_model)
    print(nouns)

    The result should be as follows:

    [('geese', <Noun_number.SINGULAR: 1>), ('road', <Noun_number.SINGULAR: 1>)]
    [('geese', <Noun_number.PLURAL: 2>), ('road', <Noun_number.SINGULAR: 1>)]
  9. Let’s now use GPT-3.5 to get the noun number. In the results, we see that GPT-3.5 correctly identifies the number of both geese and road:
    from openai import OpenAI
    client = OpenAI(api_key=OPEN_AI_KEY)
    prompt = """Decide whether each noun in the following text is singular or plural.
    Return the list in the format of a python tuple: (word, number). Do not provide any additional explanations.
    Sentence: Three geese crossed the road."""
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",
        temperature=0,
        max_tokens=256,
        top_p=1.0,
        frequency_penalty=0,
        presence_penalty=0,
        messages=[
            {"role": "system", "content": "You are a helpful assistant."},
            {"role": "user", "content": prompt}
        ],
    )
    print(response.choices[0].message.content)

    The result should be as follows:

    ('geese', 'plural')
    ('road', 'singular')

There’s more…

We can also change the nouns from plural to singular, and vice versa. We will use the textblob package for that. The package should be installed automatically via the Poetry environment:

  1. Import the TextBlob class from the package:
    from textblob import TextBlob
  2. Initialize a list of text variables and process them using the TextBlob class via a list comprehension:
    texts = ["book", "goose", "pen", "point", "deer"]
    blob_objs = [TextBlob(text) for text in texts]
  3. Use the pluralize function of the object to get the plural. This function returns a list, and we access its first element. Print the result:
    plurals = [blob_obj.words.pluralize()[0] for blob_obj in blob_objs]
    print(plurals)

    The result should be as follows:

    ['books', 'geese', 'pens', 'points', 'deer']
  4. Now, we will do the reverse. We use the preceding plurals list to turn the plural nouns into TextBlob objects:
    blob_objs = [TextBlob(text) for text in plurals]
  5. Turn the nouns into singular using the singularize function and print:
    singulars = [blob_obj.words.singularize()[0] for blob_obj in blob_objs]
    print(singulars)

    The result should be the same as the list we started with in step 2:

    ['book', 'goose', 'pen', 'point', 'deer']

Getting the dependency parse

A dependency parse is a representation of the grammatical dependencies between the words of a sentence. For example, in the sentence The cat wore a hat, the root of the sentence is the verb, wore, and both the subject, the cat, and the object, a hat, are its dependents. The dependency parse can be very useful in many NLP tasks since it shows the grammatical structure of the sentence, with the subject, the main verb, the object, and so on. It can then be used in downstream processing.

The spaCy NLP engine does the dependency parse as part of its overall analysis. The dependency parse tags explain the role of each word in the sentence. ROOT is the main word that all other words depend on, usually the verb.
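As a minimal sketch of what this looks like in code (assuming the en_core_web_sm model is installed; this snippet is illustrative and not part of the recipe below), you can print each token’s dependency label and head for the example sentence above:

    import spacy

    # Print each token with its dependency label and the token it depends on.
    nlp = spacy.load("en_core_web_sm")
    doc = nlp("The cat wore a hat.")
    for token in doc:
        print(token.text, token.dep_, token.head.text)
    # Expected: "wore" is labeled ROOT, "cat" is its nsubj, and "hat" its dobj.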

Getting ready

We will use spaCy to create the dependency parse. The required packages are part of the Poetry environment.

How to do it…

We will take a sentence from the sherlock_holmes_1.txt file to illustrate the dependency parse. The steps are as follows:

  1. Run the file and language utility notebooks:
    %run -i "../util/file_utils.ipynb"
    %run -i "../util/lang_utils.ipynb"
  2. Define the sentence we will be parsing:
    sentence = 'I have seldom heard him mention her under any other name.'
  3. Define a function that will print the word, its grammatical function embedded in the dep_ attribute, and the explanation of that attribute. The dep_ attribute of the Token object shows the grammatical function of the word in the sentence:
    def print_dependencies(sentence, model):
        doc = model(sentence)
        for token in doc:
            print(token.text, "\t", token.dep_, "\t",
                spacy.explain(token.dep_))
  4. Now, let’s use this function on our sentence. We can see that the verb heard is the ROOT word of the sentence, with all other words depending on it:
    print_dependencies(sentence, small_model)

    The result should be as follows:

    I    nsubj    nominal subject
    have    aux    auxiliary
    seldom    advmod    adverbial modifier
    heard    ROOT    root
    him    nsubj    nominal subject
    mention    ccomp    clausal complement
    her    dobj    direct object
    under    prep    prepositional modifier
    any    det    determiner
    other    amod    adjectival modifier
    name    pobj    object of preposition
    .    punct    punctuation
  5. To explore the dependency parse structure, we can use the attributes of the Token class. Using the ancestors and children attributes, we can get the tokens that this token depends on and the tokens that depend on it, respectively. The function to print the ancestors is as follows:
    def print_ancestors(sentence, model):
        doc = model(sentence)
        for token in doc:
            print(token.text, [t.text for t in token.ancestors])
  6. Now, let’s use this function on our sentence:
    print_ancestors(sentence, small_model)

    The output will be as follows. In the result, we see that heard has no ancestors since it is the main word in the sentence. All other words depend on it and, in fact, contain heard in their ancestor lists.

    The dependency chain can be seen by following the ancestor links for each word. For example, if we look at the word name, we see that its ancestors are under, mention, and heard. The immediate parent of name is under, the parent of under is mention, and the parent of mention is heard. A dependency chain will always lead to the root, or the main word, of the sentence (a sketch of how to walk such a chain programmatically is shown after these steps):

    I ['heard']
    have ['heard']
    seldom ['heard']
    heard []
    him ['mention', 'heard']
    mention ['heard']
    her ['mention', 'heard']
    under ['mention', 'heard']
    any ['name', 'under', 'mention', 'heard']
    other ['name', 'under', 'mention', 'heard']
    name ['under', 'mention', 'heard']
    . ['heard']
  7. To see all the children, use the following function. This function prints out each word and the words that depend on it, its children:
    def print_children(sentence, model):
        doc = model(sentence)
        for token in doc:
            print(token.text, [t.text for t in token.children])
  8. Now, let’s use this function on our sentence:
    print_children(sentence, small_model)

    The result should be as follows. Now, the word heard has a list of words that depend on it since it is the main word in the sentence:

    I []
    have []
    seldom []
    heard ['I', 'have', 'seldom', 'mention', '.']
    him []
    mention ['him', 'her', 'under']
    her []
    under ['name']
    any []
    other []
    name ['any', 'other']
    . []
  9. We can also see left and right children in separate lists. In the following function, we print the children as two separate lists, left and right. This can be useful when doing grammatical transformations in the sentence:
    def print_lefts_and_rights(sentence, model):
        doc = model(sentence)
        for token in doc:
            print(token.text,
                [t.text for t in token.lefts],
                [t.text for t in token.rights])
  10. Let’s use this function on our sentence:
    print_lefts_and_rights(sentence, small_model)

    The result should be as follows:

    I [] []
    have [] []
    seldom [] []
    heard ['I', 'have', 'seldom'] ['mention', '.']
    him [] []
    mention ['him'] ['her', 'under']
    her [] []
    under [] ['name']
    any [] []
    other [] []
    name ['any', 'other'] []
    . [] []
  11. We can also see the subtree that the token is in by using this function:
    def print_subtree(sentence, model):
        doc = model(sentence)
        for token in doc:
            print(token.text, [t.text for t in token.subtree])
  12. Let’s use this function on our sentence:
    print_subtree(sentence, small_model)

    The result should be as follows. From the subtrees that each word is part of, we can see the grammatical phrases that appear in the sentence, such as the noun phrase any other name and the prepositional phrase under any other name:

    I ['I']
    have ['have']
    seldom ['seldom']
    heard ['I', 'have', 'seldom', 'heard', 'him', 'mention', 'her', 'under', 'any', 'other', 'name', '.']
    him ['him']
    mention ['him', 'mention', 'her', 'under', 'any', 'other', 'name']
    her ['her']
    under ['under', 'any', 'other', 'name']
    any ['any']
    other ['other']
    name ['any', 'other', 'name']
    . ['.']
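As mentioned in step 6, a dependency chain can also be walked explicitly. Here is a small sketch (not part of the recipe) that follows each token’s head attribute until it reaches the root, which is its own head:

    def dependency_chain(token):
        # Walk the head links up to the sentence root.
        chain = []
        while token.head is not token:
            token = token.head
            chain.append(token.text)
        return chain

    doc = small_model("I have seldom heard him mention her under any other name.")
    print(dependency_chain(doc[10]))  # doc[10] is "name"; expected: ['under', 'mention', 'heard']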

See also

The dependency parse can be visualized graphically using the displaCy package, which is part of spaCy. Please see Chapter 7, Visualizing Text Data, for a detailed recipe on how to do the visualization.
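For a quick preview without leaving this chapter, here is a minimal sketch (assuming a Jupyter notebook and the small_model loaded by the utility notebooks; it is not part of the recipe) that renders the parse of the sentence used above:

    from spacy import displacy

    # Render the dependency tree inline in the notebook.
    doc = small_model("I have seldom heard him mention her under any other name.")
    displacy.render(doc, style="dep", jupyter=True)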

Extracting noun chunks

Noun chunks are known in linguistics as noun phrases. They represent nouns together with any words that depend on and accompany them. For example, in the sentence The big red apple fell on the scared cat, the noun chunks are the big red apple and the scared cat. Extracting these noun chunks is instrumental to many other downstream NLP tasks, such as named entity recognition and processing entities and relations between them. In this recipe, we will explore how to extract noun chunks from a text.
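As a quick illustration with the example sentence above (a sketch that is not part of the recipe; it assumes the small_model loaded by the utility notebooks), the noun chunks can be listed directly from the Doc object:

    # List the noun chunks of the example sentence.
    doc = small_model("The big red apple fell on the scared cat.")
    print([chunk.text for chunk in doc.noun_chunks])
    # Expected output: ['The big red apple', 'the scared cat']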

Getting ready

We will use the spaCy package, which has a function for extracting noun chunks, and the text from the sherlock_holmes_1.txt file as an example.

How to do it…

Use the following steps to get the noun chunks from a text:

  1. Run the file and language utility notebooks:
    %run -i "../util/file_utils.ipynb"
    %run -i "../util/lang_utils.ipynb"
  2. Define the function that will print out the noun chunks. The noun chunks are contained in the doc.noun_chunks attribute:
    def print_noun_chunks(text, model):
        doc = model(text)
        for noun_chunk in doc.noun_chunks:
            print(noun_chunk.text)
  3. Read the text from the sherlock_holmes_1.txt file and use the function on the resulting text:
    sherlock_holmes_part_of_text = read_text_file("../data/sherlock_holmes_1.txt")
    print_noun_chunks(sherlock_holmes_part_of_text, small_model)

    This is the partial result. See the output of the notebook at https://github.com/PacktPublishing/Python-Natural-Language-Processing-Cookbook-Second-Edition/blob/main/Chapter02/noun_chunks_2.3.ipynb for the full printout. The function correctly gets the pronouns, nouns, and noun phrases that are in the text:

    Sherlock Holmes
    she
    the_ woman
    I
    him
    her
    any other name
    his eyes
    she
    the whole
    …

There’s more…

Noun chunks are spaCy Span objects and have all their properties. See the official documentation at https://spacy.io/api/token.

Let’s explore some properties of noun chunks:

  1. We will define a function that will print out the different properties of noun chunks. It will print the text of the noun chunk, its start and end indices within the Doc object, the sentence it belongs to (useful when there is more than one sentence), the root of the noun chunk (its main word), and the chunk’s similarity to the word emotions. Finally, it will print out the similarity of the whole input sentence to emotions:
    def explore_properties(sentence, model):
        doc = model(sentence)
        other_span = "emotions"
        other_doc = model(other_span)
        for noun_chunk in doc.noun_chunks:
            print(noun_chunk.text)
            print("Noun chunk start and end", "\t",
                noun_chunk.start, "\t", noun_chunk.end)
            print("Noun chunk sentence:", noun_chunk.sent)
            print("Noun chunk root:", noun_chunk.root.text)
            print(f"Noun chunk similarity to '{other_span}'",
                noun_chunk.similarity(other_doc))
        print(f"Similarity of the sentence '{sentence}' to '{other_span}':",
            doc.similarity(other_doc))
  2. Set the sentence to All emotions, and that one particularly, were abhorrent to his cold, precise but admirably balanced mind:
    sentence = "All emotions, and that one particularly, were abhorrent to his cold, precise but admirably balanced mind."
  3. Use the explore_properties function on the sentence using the small model:
    explore_properties(sentence, small_model)

    This is the result:

    All emotions
    Noun chunk start and end    0    2
    Noun chunk sentence: All emotions, and that one particularly, were abhorrent to his cold, precise but admirably balanced mind.
    Noun chunk root: emotions
    Noun chunk similarity to 'emotions' 0.4026421588260174
    his cold, precise but admirably balanced mind
    Noun chunk start and end    11    19
    Noun chunk sentence: All emotions, and that one particularly, were abhorrent to his cold, precise but admirably balanced mind.
    Noun chunk root: mind
    Noun chunk similarity to 'emotions' -0.036891259527462
    Similarity of the sentence 'All emotions, and that one particularly, were abhorrent to his cold, precise but admirably balanced mind.' to 'emotions': 0.03174900767577446

    You will also see a warning message similar to this one, due to the fact that the small model does not ship with word vectors of its own:

    /tmp/ipykernel_1807/2430050149.py:10: UserWarning: [W007] The model you're using has no word vectors loaded, so the result of the Span.similarity method will be based on the tagger, parser and NER, which may not give useful similarity judgements. This may happen if you're using one of the small models, e.g. `en_core_web_sm`, which don't ship with word vectors and only use context-sensitive tensors. You can always add your own word vectors, or use one of the larger models instead if available.  print(f"Noun chunk similarity to '{other_span}'", noun_chunk.similarity(other_doc))
  4. Now, let’s apply the same function to the same sentence with the large model:
    sentence = "All emotions, and that one particularly, were abhorrent to his cold, precise but admirably balanced mind."
    explore_properties(sentence, large_model)

    The large model does come with its own word vectors and does not result in a warning:

    All emotions
    Noun chunk start and end    0    2
    Noun chunk sentence: All emotions, and that one particularly, were abhorrent to his cold, precise but admirably balanced mind.
    Noun chunk root: emotions
    Noun chunk similarity to 'emotions' 0.6302678068015664
    his cold, precise but admirably balanced mind
    Noun chunk start and end    11    19
    Noun chunk sentence: All emotions, and that one particularly, were abhorrent to his cold, precise but admirably balanced mind.
    Noun chunk root: mind
    Noun chunk similarity to 'emotions' 0.5744456705692561
    Similarity of the sentence 'All emotions, and that one particularly, were abhorrent to his cold, precise but admirably balanced mind.' to 'emotions': 0.640366414527618

    We see that the similarity of the All emotions noun chunk to the word emotions is high, as compared to the similarity of the his cold, precise but admirably balanced mind noun chunk.

Important note

A larger spaCy model, such as en_core_web_lg, takes up more space but is more precise.

See also

The topic of semantic similarity will be explored in more detail in Chapter 3.

Extracting subjects and objects of the sentence

Sometimes, we might need to find the subject and direct objects of the sentence, and that is easily accomplished with the spaCy package.

Getting ready

We will be using the dependency tags from spaCy to find subjects and objects. The code uses the spaCy engine to parse the sentence. Then, the subject function loops through the tokens, and if the dependency tag contains subj, it returns that token’s subtree, a Span object. There are different subject tags, including nsubj for regular subjects and nsubjpass for subjects of passive sentences, thus we want to look for both.

How to do it…

We will use the subtree attribute of tokens to find the complete noun chunk that is the subject or direct object of the verb (see the Getting the dependency parse recipe). We will define functions to find the subject, direct object, dative phrase, and prepositional phrases:

  1. Run the file and language utility notebooks:
    %run -i "../util/file_utils.ipynb"
    %run -i "../util/lang_utils.ipynb"
  2. We will use two functions to find the subject and the direct object of the sentence. These functions will loop through the tokens and return the subtree that contains the token with subj or dobj in the dependency tag, respectively. Here is the subject function. It looks for the token that has a dependency tag that contains subj and then returns the subtree that contains that token. There are several subject dependency tags, including nsubj and nsubjpass (for the subject of a passive sentence), so we look for the most general pattern:
    def get_subject_phrase(doc):
        for token in doc:
            if ("subj" in token.dep_):
                subtree = list(token.subtree)
                start = subtree[0].i
                end = subtree[-1].i + 1
                return doc[start:end]
  3. Here is the direct object function. It works similarly to get_subject_phrase but looks for the dobj dependency tag instead of a tag that contains subj. If the sentence does not have a direct object, it will return None:
    def get_object_phrase(doc):
        for token in doc:
            if ("dobj" in token.dep_):
                subtree = list(token.subtree)
                start = subtree[0].i
                end = subtree[-1].i + 1
                return doc[start:end]
  4. Assign a list of sentences to a variable, loop through them, and use the preceding functions to print out their subjects and objects:
    sentences = [
        "The big black cat stared at the small dog.",
        "Jane watched her brother in the evenings.",
        "Laura gave Sam a very interesting book."
    ]
    for sentence in sentences:
        doc = small_model(sentence)
        subject_phrase = get_subject_phrase(doc)
        object_phrase = get_object_phrase(doc)
        print(sentence)
        print("\tSubject:", subject_phrase)
        print("\tDirect object:", object_phrase)

    The result will be as follows. Since the first sentence does not have a direct object, None is printed out. For the sentence The big black cat stared at the small dog, the subject is the big black cat and there is no direct object (the small dog is the object of the preposition at). For the sentence Jane watched her brother in the evenings, the subject is Jane and the direct object is her brother. In the sentence Laura gave Sam a very interesting book, the subject is Laura and the direct object is a very interesting book:

    The big black cat stared at the small dog.
      Subject: The big black cat
      Direct object: None
    Jane watched her brother in the evenings.
      Subject: Jane
      Direct object: her brother
    Laura gave Sam a very interesting book.
      Subject: Laura
      Direct object: a very interesting book

There’s more…

We can look for other objects, for example, the dative objects of verbs such as give and the objects of prepositional phrases. The functions will look very similar, with the main difference being the dependency tags: dative for the dative object function, and pobj for the prepositional object function. The prepositional object function will return a list since there can be more than one prepositional phrase in a sentence:

  1. The dative object function checks the tokens for the dative tag. It returns None if there are no dative objects:
    def get_dative_phrase(doc):
        for token in doc:
            if ("dative" in token.dep_):
                subtree = list(token.subtree)
                start = subtree[0].i
                end = subtree[-1].i + 1
                return doc[start:end]
  2. We can also combine the subject, object, and dative functions into one with an argument that specifies which object to look for:
    def get_phrase(doc, phrase):
        # phrase is one of "subj", "obj", "dative"
        for token in doc:
            if (phrase in token.dep_):
                subtree = list(token.subtree)
                start = subtree[0].i
                end = subtree[-1].i + 1
                return doc[start:end]
  3. Let us now define a sentence with a dative object and run the function for all three types of phrases:
    sentence = "Laura gave Sam a very interesting book."
    doc = small_model(sentence)
    subject_phrase = get_phrase(doc, "subj")
    object_phrase = get_phrase(doc, "obj")
    dative_phrase = get_phrase(doc, "dative")
    print(sentence)
    print("\tSubject:", subject_phrase)
    print("\tDirect object:", object_phrase)
    print("\tDative object:", dative_phrase)

    The result will be as follows. The dative object is Sam:

    Laura gave Sam a very interesting book.
      Subject: Laura
      Direct object: a very interesting book
      Dative object: Sam
  4. Here is the prepositional object function. It returns a list of objects of prepositions, which will be empty if there are none:
    def get_prepositional_phrase_objs(doc):
        prep_spans = []
        for token in doc:
            if ("pobj" in token.dep_):
                subtree = list(token.subtree)
                start = subtree[0].i
                end = subtree[-1].i + 1
                prep_spans.append(doc[start:end])
        return prep_spans
  5. Let’s define a list of sentences and run the two functions on them:
    sentences = [
        "The big black cat stared at the small dog.",
        "Jane watched her brother in the evenings."
    ]
    for sentence in sentences:
        doc = small_model(sentence)
        subject_phrase = get_phrase(doc, "subj")
        object_phrase = get_phrase(doc, "obj")
        dative_phrase = get_phrase(doc, "dative")
        prepositional_phrase_objs = \
            get_prepositional_phrase_objs(doc)
        print(sentence)
        print("\tSubject:", subject_phrase)
        print("\tDirect object:", object_phrase)
        print("\tPrepositional phrases:", prepositional_phrase_objs)

    The result will be as follows:

    The big black cat stared at the small dog.
      Subject: The big black cat
      Direct object: the small dog
      Prepositional phrases: [the small dog]
    Jane watched her brother in the evenings.
      Subject: Jane
      Direct object: her brother
      Prepositional phrases: [the evenings]

    There is one prepositional phrase in each sentence. In the sentence The big black cat stared at the small dog, it is at the small dog, and in the sentence Jane watched her brother in the evenings, it is in the evenings.

It is left as an exercise for you to find the actual prepositional phrases with prepositions intact, instead of just the noun phrases that are dependent on these prepositions.
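One possible starting point for that exercise (an assumption on our part, not the book’s solution) is to take the subtree of each preposition itself, the token tagged prep, since that subtree spans the preposition together with its object:

    def get_prepositional_phrases(doc):
        # Collect the subtree of every preposition; it includes the preposition
        # and the noun phrase that depends on it.
        prep_spans = []
        for token in doc:
            if token.dep_ == "prep":
                subtree = list(token.subtree)
                prep_spans.append(doc[subtree[0].i:subtree[-1].i + 1])
        return prep_spans

    doc = small_model("The big black cat stared at the small dog.")
    print(get_prepositional_phrases(doc))  # expected: [at the small dog]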

Finding patterns in text using grammatical information

In this section, we will use the spaCy Matcher object to find patterns in the text. We will use the grammatical properties of the words to create these patterns. For example, we might be looking for verb phrases instead of noun phrases. We can specify grammatical patterns to match verb phrases.

Getting ready

We will be using the spaCy Matcher object to specify and find patterns. It can match different properties, not just grammatical ones. You can find out more in the documentation at https://spacy.io/usage/rule-based-matching/.

How to do it…

The steps are as follows:

  1. Run the file and language utility notebooks:
    %run -i "../util/file_utils.ipynb"
    %run -i "../util/lang_utils.ipynb"
  2. Import the Matcher object and initialize it. We need to put in the vocabulary object, which is the same as the vocabulary of the model we will be using to process the text:
    from spacy.matcher import Matcher
    matcher = Matcher(small_model.vocab)
  3. Create a list of patterns and add them to the matcher. Each pattern is a list of dictionaries, where each dictionary describes a token. In our patterns, we only specify the part of speech for each token. We then add these patterns to the Matcher object. The patterns we will be using are a verb by itself (for example, paints), an auxiliary followed by a verb (for example, was observing), an auxiliary followed by an adjective (for example, were late), and an auxiliary followed by a verb and a preposition (for example, were staring at). This is not an exhaustive list; feel free to come up with other examples:
    patterns = [
        [{"POS": "VERB"}],
        [{"POS": "AUX"}, {"POS": "VERB"}],
        [{"POS": "AUX"}, {"POS": "ADJ"}],
        [{"POS": "AUX"}, {"POS": "VERB"}, {"POS": "ADP"}]
    ]
    matcher.add("Verb", patterns)
  4. Read in the small part of the Sherlock Holmes text and process it using the small model:
    sherlock_holmes_part_of_text = read_text_file("../data/sherlock_holmes_1.txt")
    doc = small_model(sherlock_holmes_part_of_text)
  5. Now, we find the matches using the Matcher object and the processed text. We then loop through the matches and print out the match ID, the string ID (the identifier of the pattern), the start and end of the match, and the text of the match:
    matches = matcher(doc)
    for match_id, start, end in matches:
        string_id = small_model.vocab.strings[match_id]
        span = doc[start:end]
        print(match_id, string_id, start, end, span.text)

    The result will be as follows:

    14677086776663181681 Verb 14 15 heard
    14677086776663181681 Verb 17 18 mention
    14677086776663181681 Verb 28 29 eclipses
    14677086776663181681 Verb 31 32 predominates
    14677086776663181681 Verb 43 44 felt
    14677086776663181681 Verb 49 50 love
    14677086776663181681 Verb 63 65 were abhorrent
    14677086776663181681 Verb 80 81 take
    14677086776663181681 Verb 88 89 observing
    14677086776663181681 Verb 94 96 has seen
    14677086776663181681 Verb 95 96 seen
    14677086776663181681 Verb 103 105 have placed
    14677086776663181681 Verb 104 105 placed
    14677086776663181681 Verb 114 115 spoke
    14677086776663181681 Verb 120 121 save
    14677086776663181681 Verb 130 132 were admirable
    14677086776663181681 Verb 140 141 drawing
    14677086776663181681 Verb 153 154 trained
    14677086776663181681 Verb 157 158 admit
    14677086776663181681 Verb 167 168 adjusted
    14677086776663181681 Verb 171 172 introduce
    14677086776663181681 Verb 173 174 distracting
    14677086776663181681 Verb 178 179 throw
    14677086776663181681 Verb 228 229 was

The code finds some of the verb phrases in the text. Sometimes, it finds a partial match that is part of another match. Weeding out these partial matches is left as an exercise.
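One way to weed them out (a sketch, not the book’s solution) is spaCy’s spacy.util.filter_spans helper, which keeps only the longest non-overlapping spans:

    from spacy.util import filter_spans

    # Convert the matches to spans and keep only the longest non-overlapping ones.
    spans = [doc[start:end] for _, start, end in matches]
    for span in filter_spans(spans):
        print(span.start, span.end, span.text)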

See also

We can use other attributes apart from parts of speech. It is possible to match on the text itself, its length, whether it is alphanumeric, the punctuation, the word’s case, the dep_ and morph attributes, the lemma, the entity type, and others. It is also possible to use regular expressions in the patterns. For more information, see the spaCy documentation: https://spacy.io/usage/rule-based-matching.
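As a small illustration (a sketch that is not part of the recipe), the following patterns match on the LEMMA attribute and on a regular expression over the token text, two of the documented Matcher token attributes:

    from spacy.matcher import Matcher

    matcher = Matcher(small_model.vocab)
    # A token with the lemma "be" followed by an adjective, e.g. "were boring".
    matcher.add("BeAdj", [[{"LEMMA": "be"}, {"POS": "ADJ"}]])
    # Any token whose text ends in "ing".
    matcher.add("IngToken", [[{"TEXT": {"REGEX": "ing$"}}]])
    doc = small_model("The lectures were boring but the labs are interesting.")
    for match_id, start, end in matcher(doc):
        print(small_model.vocab.strings[match_id], doc[start:end].text)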


Key benefits

  • Leverage ready-to-use recipes with the latest LLMs, including Mistral, Llama, and OpenAI models
  • Use LLM-powered agents for custom tasks and real-world interactions
  • Gain practical, in-depth knowledge of transformers and their role in implementing various NLP tasks with open-source and advanced LLMs
  • Purchase of the print or Kindle book includes a free PDF eBook

Description

Harness the power of Natural Language Processing (NLP) to overcome real-world text analysis challenges with this recipe-based roadmap written by two seasoned NLP experts with vast experience transforming various industries with their NLP prowess.

You’ll be able to make the most of the latest NLP advancements, including large language models (LLMs), and leverage their capabilities through Hugging Face transformers. Through a series of hands-on recipes, you’ll master essential techniques such as extracting entities and visualizing text data. The authors will expertly guide you through building pipelines for sentiment analysis, topic modeling, and question-answering using popular libraries like spaCy, Gensim, and NLTK. You’ll also learn to implement RAG pipelines to draw out precise answers from a text corpus using LLMs.

This second edition expands your skillset with new chapters on cutting-edge LLMs like GPT-4, Natural Language Understanding (NLU), and Explainable AI (XAI), fostering trust in your NLP models.

By the end of this book, you'll be equipped with the skills to apply advanced text processing techniques, use pre-trained transformer models, and build custom NLP pipelines to extract valuable insights from text data to drive informed decision-making.

Who is this book for?

This updated edition of the Python Natural Language Processing Cookbook is for data scientists, machine learning engineers, and developers with a background in Python. Whether you’re looking to learn NLP techniques, extract valuable insights from textual data, or create foundational applications, this book will equip you with basic to intermediate skills. No prior NLP knowledge is necessary to get started. All you need is familiarity with basic programming principles. For seasoned developers, the updated sections offer the latest on transformers, explainable AI, and Generative AI with LLMs.

What you will learn

  • Understand fundamental NLP concepts along with their applications using examples in Python
  • Classify text quickly and accurately with rule-based and supervised methods
  • Train NER models and perform sentiment analysis to identify entities and emotions in text
  • Explore topic modeling and text visualization to reveal themes and relationships within text
  • Leverage Hugging Face and OpenAI LLMs to perform advanced NLP tasks
  • Use question-answering techniques to handle both open and closed domains
  • Apply XAI techniques to better understand your model predictions

Product Details

Publication date: Sep 13, 2024
Length: 312 pages
Edition: 2nd
Language: English
ISBN-13: 9781803241449
Vendor: Google


Table of Contents

12 Chapters
Chapter 1: Learning NLP Basics
Technical requirements
Dividing text into sentences
Dividing sentences into words – tokenization
Part of speech tagging
Combining similar words – lemmatization
Removing stopwords
Chapter 2: Playing with Grammar
Technical requirements
Counting nouns – plural and singular nouns
Getting the dependency parse
Extracting noun chunks
Extracting subjects and objects of the sentence
Finding patterns in text using grammatical information
Chapter 3: Representing Text – Capturing Semantics
Technical requirements
Creating a simple classifier
Putting documents into a bag of words
Constructing an N-gram model
Representing texts with TF-IDF
Using word embeddings
Training your own embeddings model
Using BERT and OpenAI embeddings instead of word embeddings
Retrieval augmented generation (RAG)
Chapter 4: Classifying Texts
Technical requirements
Getting the dataset and evaluation ready
Performing rule-based text classification using keywords
Clustering sentences using K-Means – unsupervised text classification
Using SVMs for supervised text classification
Training a spaCy model for supervised text classification
Classifying texts using OpenAI models
Chapter 5: Getting Started with Information Extraction
Technical requirements
Using regular expressions
Finding similar strings – Levenshtein distance
Extracting keywords
Performing named entity recognition using spaCy
Training your own NER model with spaCy
Fine-tuning BERT for NER
Chapter 6: Topic Modeling
Technical requirements
LDA topic modeling with gensim
Community detection clustering with SBERT
K-Means topic modeling with BERT
Topic modeling using BERTopic
Using contextualized topic models
Chapter 7: Visualizing Text Data
Technical requirements
Visualizing the dependency parse
Visualizing parts of speech
Visualizing NER
Creating a confusion matrix plot
Constructing word clouds
Visualizing topics from Gensim
Visualizing topics from BERTopic
Chapter 8: Transformers and Their Applications
Technical requirements
Loading a dataset
Tokenizing the text in your dataset
Classifying text
Using a zero-shot classifier
Generating text
Language translation
Chapter 9: Natural Language Understanding
Technical requirements
Answering questions from a short text passage
Answering questions from a long text passage
Answering questions from a document corpus in an extractive manner
Answering questions from a document corpus in an abstractive manner
Summarizing text using pre-trained models based on Transformers
Detecting sentence entailment
Enhancing explainability via a classifier-invariant approach
Enhancing explainability via text generation
Chapter 10: Generative AI and Large Language Models
Technical requirements
Running an LLM locally
Running an LLM to follow instructions
Augmenting an LLM with external data
Creating a chatbot using an LLM
Generating code using an LLM
Generating a SQL query using human-defined requirements
Agents – making an LLM to reason and act
Using OpenAI models instead of local ones
Index
Why subscribe?
Other Books You May Enjoy
Packt is searching for authors like you
Share Your Thoughts
Download a free PDF copy of this book


Customer reviews

Rating distribution: 5 out of 5 (5 ratings)
5 star: 100%, 4 star: 0%, 3 star: 0%, 2 star: 0%, 1 star: 0%
Amazon Customer, Oct 26, 2024 (5 stars)
This book is a remarkable resource for anyone eager to dive into the world of Natural Language Processing (NLP). Authored by two seasoned NLP experts, this book offers a recipe-based approach that effectively addresses real-world text analysis challenges. One of the main strengths of this edition is its focus on the latest advancements in NLP, particularly large language models (LLMs) like GPT-4, and the practical application of these technologies using Hugging Face transformers. The authors provide a wealth of hands-on recipes that guide readers through essential techniques, such as entity extraction, sentiment analysis, and topic modeling, using popular libraries like spaCy, Gensim, and NLTK. This practical approach makes complex concepts accessible, allowing both beginners and seasoned developers to enhance their skills. The addition of new chapters on Natural Language Understanding (NLU) and Explainable AI (XAI) enriches the content, fostering a deeper understanding of model transparency and trustworthiness, an increasingly important aspect of AI applications. By the end of the book, readers will be well-equipped to build custom NLP pipelines and apply advanced techniques to extract valuable insights from text data.
Amazon Verified review

Om S, Oct 15, 2024 (5 stars)
The Python Natural Language Processing Cookbook offers a hands-on, recipe-based approach to mastering NLP techniques, making it a valuable resource for both beginners and experienced developers. This second edition stands out by introducing the latest in Large Language Models (LLMs), such as GPT-4, Mistral, and Llama, while covering foundational NLP concepts like text classification, topic modeling, and information extraction. With practical examples using popular tools like Hugging Face and OpenAI, the book excels in showcasing how to implement LLM-powered agents and advanced NLP tasks. The new chapters on transformers, explainable AI, and natural language understanding (NLU) make it particularly relevant for anyone eager to dive into cutting-edge NLP technologies. Whether you're just starting out or looking to enhance your expertise, this book provides clear, actionable insights into NLP's evolving landscape.
Amazon Verified review

Advitya Gemawat, Oct 16, 2024 (5 stars)
Throughout the book, I appreciated the infusion of both traditional NLP and GenAI concepts in almost every chapter of the book, which helps in ‘grounding’ GenAI concepts in foundational knowledge (get the pun? :))

Here're my top takeaways from the book:

📖 The traditional NLP content in Chapters 1 and 3-6 closely mirrors what I studied and utilized during college, making it a relevant resource for college students in similar NLP/AI classes.

🏫 The dedicated Chapter 8 on Transformers is likely my top pick as a common topic for interview prep across multiple levels of recent grads and experienced AI practitioners.

📊 Among all the content on LLMs, the 2 topics that especially stood out to me were the coding examples on:
(1) Running an LLM locally, for quick experimentation and for college students with limited access to cloud resources
(2) Building an Agent workflow: The ease of initializing built-in tools to perform actions like internet search as a reasoning step is honestly quite refreshing, due to the democratization of typical Function Calling capabilities that it demonstrates.

More than anything, it's especially fun to get a refresher on foundational concepts and then position these in context of using modern tools that help build workflows at a higher level of abstraction.
Amazon Verified review

SA, Sep 13, 2024 (5 stars)
This book is a practical guide for solving NLP problems using Python. The book contains over 60 easy-to-follow recipes that help readers learn everything from basic NLP tasks like tokenization and lemmatization to more advanced topics, including transformers and large language models. The book is useful for a wide range of readers, from data scientists to software developers, because it explains concepts clearly and provides code examples that can be used right away. One of the highlights is that it covers the latest trends, like GPT models and transformers, which makes the book relevant in today’s fast-changing NLP field. Even though the book covers complex topics, it keeps the explanations simple, making it easy to understand and apply. The focus on Python and its libraries, like spaCy and Hugging Face, may limit its appeal for those looking for more general NLP approaches across different platforms. Overall, I found this book to be an excellent resource, motivating me to work on modern NLP projects with Python. It serves both as a learning guide and a useful reference for practical applications.
Amazon Verified review

Udbhav, Nov 11, 2024 (5 stars)
This book is an absolute must-have for anyone interested in diving into NLP, especially with a focus on large language models (LLMs) and transformers. As a data scientist, I've worked with NLP before, but I found this book incredibly helpful for bridging some of the latest advancements with practical implementation. The authors do a fantastic job breaking down complex concepts into easy-to-understand recipes, which you can directly apply to real-world challenges.

Each chapter covers a specific NLP task, from the basics of text classification and sentiment analysis to more advanced techniques like RAG pipelines and Explainable AI (XAI). The section on Hugging Face and OpenAI models is especially valuable; it shows how to utilize the latest transformer models like GPT-4 and Llama. These are techniques I’ve been wanting to implement, and this book gave me the confidence and steps to start using them in my own projects.

What sets this book apart is how approachable it is. You don’t need a background in NLP, just a solid grasp of Python. The recipe format is perfect if you’re short on time and want to get straight to the coding. Each recipe is concise but packed with insights, making it easy to go from understanding to implementing.

For me, the real bonus was the focus on transparency in model predictions through XAI; it’s rare to find such a strong emphasis on building trust in AI models. Overall, a well-rounded book that has earned a permanent place on my desk. Highly recommend it to both beginners and experienced developers looking to expand their NLP skillset!
Amazon Verified review

About the authors

Zhenya Antić
Zhenya Antić, Ph.D. is an expert in AI and NLP. She is currently the Director of AI Automation at Arch Insurance, where she leads initiatives in Intelligent Document Processing and applies various AI solutions to complex problems. With extensive consulting experience, Zhenya has worked on numerous NLP projects with various companies. She holds a Ph.D. in Linguistics from the University of California, Berkeley, and a B.S. in Computer Science from the Massachusetts Institute of Technology.
Saurabh Chakravarty
Saurabh Chakravarty, Ph.D. is a seasoned veteran in the software industry with over 20 years of experience in software development. A software developer at heart, he is passionate about programming. He has held various roles, including architect, lead engineer, and software developer, specializing in AI and large-scale distributed systems. Saurabh has worked with Microsoft, Rackspace, and Accenture, as well as with a few startups. He holds a Ph.D. in Computer Science with a specialization in NLP from Virginia Tech, USA. Saurabh lives in California with his wife, Tina, and daughter, Aaliya, and works for AWS in Santa Clara, California.


