Transport Layer: Transport-layer services · Multiplexing and demultiplexing · Connectionless transport: UDP · Principles of reliable data transfer · Connection-oriented transport: TCP · Principles of congestion control · TCP congestion control · Evolution of transport-layer functionality
COMPSCI 453 Computer Networks
Professor Jim Kurose
College of Information and Computer Sciences, University of Massachusetts
Class textbook: Computer Networking: A Top-Down Approach (8th ed.), J.F. Kurose, K.W. Ross, Pearson, 2020
http://gaia.cs.umass.edu/kurose_ross
2.
TCP: overview — RFCs: 793, 1122, 2018, 5681, 7323
 point-to-point: one sender, one receiver
 reliable, in-order byte stream: no "message boundaries"
 full duplex data: bi-directional data flow in same connection; MSS: maximum segment size
 cumulative ACKs
 pipelining: TCP congestion and flow control set window size
 connection-oriented: handshaking (exchange of control messages) initializes sender, receiver state before data exchange
 flow controlled: sender will not overwhelm receiver
3.
1. Point-to-Point
TCP is point-to-point, meaning one connection has exactly one sender and one receiver.
Example: when your browser connects to a web server, that's one TCP connection between your computer and that server. (Unlike UDP, TCP doesn't do one-to-many or broadcast communication.)

2. Reliable, In-Order Byte Stream
TCP provides:
1. Reliable delivery: if a packet is lost or corrupted, TCP retransmits it.
2. In-order delivery: even if packets arrive out of order, TCP reorders them before delivering to the application.
3. Byte stream: TCP treats all the data as one long stream of bytes, like this HELLO WORLD example. Each byte has a sequence number (used for reliability and ordering).

Byte number: 1 2 3 4 5 6 7 ...
Data:        H E L L O _ W ...

So TCP is managing this long "stream of bytes," not separate packets.
Transport Layer: 3-3
4.
3. Full Duplex Data
TCP allows full duplex communication: data can flow in both directions simultaneously.
Example: while your browser is sending a request to a web server, the server can also start sending data back at the same time.

4. MSS (Maximum Segment Size)
MSS means Maximum Segment Size: the largest chunk of data (in bytes) that TCP can send in one segment.
Typical MSS ≈ 1460 bytes (for Ethernet, since the total frame is 1500 bytes).
Example: if MSS = 1000 bytes, TCP sends at most 1000 bytes of user data per segment.
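To make the MSS example concrete, here is a small illustrative sketch (not from the slides; the function name `segmentize` is hypothetical) of chopping an application byte stream into segments of at most MSS bytes of user data:

```python
MSS = 1000  # bytes of user data per segment, as in the slide's example

def segmentize(data: bytes, mss: int = MSS) -> list[bytes]:
    """Split a byte stream into chunks of at most `mss` bytes each."""
    return [data[i:i + mss] for i in range(0, len(data), mss)]

# 2500 bytes of data with MSS = 1000 yields segments of 1000, 1000, and 500 bytes
chunks = segmentize(b"x" * 2500)
```

Real TCP implementations also negotiate the MSS during connection setup; this sketch only shows the size arithmetic.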
5.
5. Cumulative Acknowledgments (ACKs)
TCP uses cumulative ACKs, which means the ACK number represents the next byte expected by the receiver.
Example: if the receiver has received bytes 1–500, it will send ACK = 501, meaning "I got everything up to 500, now send me from 501." If the segment carrying bytes 501–600 is lost, it keeps sending ACK = 501 until it gets that data.

6. Pipelining
TCP allows multiple packets to be sent before receiving an ACK (unlike Stop-and-Wait). This improves efficiency. The number of unacknowledged packets that can be in flight depends on the window size (defined by flow control and congestion control).
Transport Layer: 3-5
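The cumulative-ACK rule above can be sketched in a few lines of Python. This is an illustration, not real TCP code: the function `cumulative_ack` and the per-byte bookkeeping are hypothetical simplifications.

```python
def cumulative_ack(received: set[int], next_expected: int) -> int:
    """Advance past every in-order byte received so far; the result is
    the ACK number: the next byte the receiver expects."""
    while next_expected in received:
        next_expected += 1
    return next_expected

received_bytes = set(range(1, 501))   # bytes 1-500 have arrived
# cumulative_ack(received_bytes, 1) is 501: "send me from 501"
# even if bytes 601-700 later arrive, the ACK stays at 501 until
# the gap 501-600 is filled
```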
6.
7. Flow Control (Receiver Side)
Flow control ensures the sender doesn't overwhelm the receiver. The receiver tells the sender how much data it can handle; this is called the Receive Window (rwnd).
Example: if rwnd = 4000 bytes, the sender can send 4000 bytes total before waiting for an ACK. This prevents the receiver's buffer from overflowing.

8. Congestion Control (Network Side)
Congestion control prevents the network (not just the receiver) from getting overloaded. TCP automatically slows down when it detects congestion.
Main algorithms:
• Slow Start
• Congestion Avoidance
• Fast Retransmit
• Fast Recovery
These help maintain fairness and stability in the Internet.
7.
9. Connection-Oriented (Handshake)
TCP is connection-oriented, meaning it establishes a connection before sending data. This is done using a 3-way handshake:
1. SYN: sender requests connection
2. SYN-ACK: receiver agrees
3. ACK: sender confirms
After this, both sides are ready to exchange data. This handshake initializes sequence numbers, buffers, and states.

10. Flow Controlled
The sender will not overwhelm the receiver. TCP adjusts its sending rate based on:
• the receiver's buffer capacity (flow control)
• network congestion (congestion control)
That's why TCP is often called "reliable and self-adjusting."
8.
TCP segment structure (fields of the 32-bit-wide header):
• source port #, dest port #
• sequence number — segment seq #: counting bytes of data into byte stream (not segments!)
• acknowledgement number — ACK: seq # of next expected byte; A bit: this is an ACK
• head len: length of TCP header
• flag bits C, E: congestion notification
• flag bits U (urgent), A (ACK), P (push), R, S, F — RST, SYN, FIN: connection management
• receive window — flow control: # bytes receiver willing to accept
• checksum: Internet checksum
• urg data pointer
• options (variable length): TCP options
• application data (variable length): data sent by application into TCP socket
9.
TCP sequence numbers, ACKs
Sequence numbers: byte stream "number" of the first byte in segment's data (carried in the outgoing segment from the sender).
Acknowledgements: seq # of next byte expected from the other side; cumulative ACK (carried in the outgoing segment from the receiver).
Sender sequence number space: bytes divide into sent-and-ACKed; sent, not-yet ACKed ("in-flight"); usable but not yet sent; and not usable. The window size N spans the in-flight and usable regions.
10.
TCP sequence numbers, ACKs: simple telnet scenario

Host A                                  Host B
User types 'C'
  -- Seq=42, ACK=79, data = 'C' -->
                                        host ACKs receipt of 'C',
                                        echoes back 'C'
  <-- Seq=79, ACK=43, data = 'C' --
host ACKs receipt of echoed 'C'
  -- Seq=43, ACK=80 -->
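The telnet exchange above follows one rule: the ACK number equals the sequence number of the segment being acknowledged plus the number of data bytes it carried. A one-line illustrative sketch (the name `ack_for` is hypothetical):

```python
def ack_for(seq: int, data: bytes) -> int:
    """ACK number = seq # of the received segment + its data length,
    i.e. the next byte expected from the other side."""
    return seq + len(data)

# Host A sends 'C' at Seq=42  ->  Host B replies with ACK=43
# Host B echoes 'C' at Seq=79 ->  Host A replies with ACK=80
```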
11.
TCP round trip time, timeout
Q: how to set TCP timeout value?
 longer than RTT, but RTT varies!
 too short: premature timeout, unnecessary retransmissions
 too long: slow reaction to segment loss
Q: how to estimate RTT?
 SampleRTT: measured time from segment transmission until ACK receipt
• ignore retransmissions
 SampleRTT will vary; want estimated RTT "smoother"
• average several recent measurements, not just current SampleRTT
12.
TCP round trip time, timeout
EstimatedRTT = (1 - α)*EstimatedRTT + α*SampleRTT
 exponentially weighted moving average (EWMA)
 influence of past sample decreases exponentially fast
 typical value: α = 0.125
[Graph: SampleRTT and EstimatedRTT (milliseconds) vs. time (seconds), measured from gaia.cs.umass.edu to fantasia.eurecom.fr]
13.
TCP round trip time, timeout
 timeout interval: EstimatedRTT plus "safety margin"
• large variation in EstimatedRTT: want a larger safety margin
TimeoutInterval = EstimatedRTT + 4*DevRTT
 DevRTT: EWMA of SampleRTT deviation from EstimatedRTT:
DevRTT = (1 - β)*DevRTT + β*|SampleRTT - EstimatedRTT|   (typically, β = 0.25)
* Check out the online interactive exercises for more examples: http://gaia.cs.umass.edu/kurose_ross/interactive/
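The two EWMA formulas and the timeout rule can be written directly in Python. This is a sketch of the slides' equations; the update order (DevRTT computed against the old EstimatedRTT, following the RFC 6298 convention) and the function name `update_rtt` are assumptions for illustration.

```python
ALPHA, BETA = 0.125, 0.25  # typical values from the slides

def update_rtt(estimated: float, dev: float, sample: float):
    """One update of EstimatedRTT, DevRTT, and TimeoutInterval
    given a new SampleRTT measurement."""
    dev = (1 - BETA) * dev + BETA * abs(sample - estimated)
    estimated = (1 - ALPHA) * estimated + ALPHA * sample
    timeout = estimated + 4 * dev   # EstimatedRTT plus "safety margin"
    return estimated, dev, timeout
```

With a steady RTT the timeout converges to the RTT itself; a sudden 8 ms jump in SampleRTT immediately widens the safety margin.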
14.
TCP Sender (works based on events)

event: data received from application
• The application (like a browser or email app) gives data to TCP to send.
• TCP creates a segment (packet) to send that data.
• TCP adds a sequence number (seq #): this number is the byte number of the first byte of data in that segment. (So, it helps the receiver keep everything in order.)
• TCP starts a timer, but only if it's not already running.
• The timer is for the oldest segment that has been sent but not yet acknowledged (ACKed).
• The TimeoutInterval is the time TCP waits for an ACK before it decides to retransmit.

event: timeout
• If the timer runs out (no ACK received in time), TCP assumes the segment is lost.
• It retransmits that segment: the one that caused the timeout.
• Then it restarts the timer again.

event: ACK received
• The receiver sends an ACK to confirm it received data.
• When the sender gets this ACK, it checks: does this ACK cover (acknowledge) any data that was still unacknowledged? If yes, those segments are now considered successfully delivered.
• TCP then updates its record of which bytes have been acknowledged.
• If there are still some unacknowledged segments, TCP keeps the timer running (or restarts it).
• If everything is acknowledged, it stops the timer.
15.
TCP Receiver: ACK generation [RFC 5681]

Event at receiver | TCP receiver action
arrival of in-order segment with expected seq #; all data up to expected seq # already ACKed | delayed ACK: wait up to 500 ms for next segment; if no next segment, send ACK
arrival of in-order segment with expected seq #; one other segment has ACK pending | immediately send single cumulative ACK, ACKing both in-order segments
arrival of out-of-order segment with higher-than-expected seq #; gap detected | immediately send duplicate ACK, indicating seq # of next expected byte
arrival of segment that partially or completely fills gap | immediately send ACK, provided that segment starts at lower end of gap
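The RFC 5681 table above is essentially a decision procedure. Here is a hypothetical sketch of it as a Python function; the boolean parameters are simplifications of the table's event descriptions, not real TCP state:

```python
def ack_action(in_order: bool, ack_pending: bool, fills_gap: bool) -> str:
    """Map a receive event to the ACK-generation action from RFC 5681."""
    if in_order and not ack_pending:
        return "delayed ACK (wait up to 500 ms)"
    if in_order and ack_pending:
        return "immediately send single cumulative ACK"
    if fills_gap:
        return "immediately send ACK"   # if segment starts at lower end of gap
    return "immediately send duplicate ACK"
```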
16.
TCP: retransmission scenarios

Lost ACK scenario: Host A sends Seq=92, 8 bytes of data; Host B's ACK=100 is lost (X). A's timeout fires, A retransmits Seq=92, 8 bytes of data, and B re-sends ACK=100.

Premature timeout: Host A (SendBase=92) sends Seq=92 (8 bytes of data) and Seq=100 (20 bytes of data); Host B replies ACK=100 and ACK=120 (SendBase advances to 100, then 120). A's timer for Seq=92 expires prematurely, so A retransmits Seq=92, 8 bytes of data. B, which already has all the data, sends a cumulative ACK for 120, and SendBase stays at 120.
17.
TCP: retransmission scenarios

Cumulative ACK covers for earlier lost ACK: Host A sends Seq=92 (8 bytes of data) and Seq=100 (20 bytes of data). B's ACK=100 is lost (X), but the cumulative ACK=120 arrives at A, so A can go on to send Seq=120, 15 bytes of data, knowing the first two segments arrived.
18.
TCP fast retransmit
 if sender receives 3 ACKs for the same data ("triple duplicate ACKs"), resend the unACKed segment with the smallest seq #
 likely that the unACKed segment was lost, so don't wait for the timeout
Receipt of three duplicate ACKs indicates 3 segments were received after a missing segment; a lost segment is likely. So retransmit!

Scenario: Host A sends Seq=92 (8 bytes of data) and Seq=100 (20 bytes of data); the Seq=100 segment is lost (X). Host B answers every later-arriving segment with ACK=100, so A sees ACK=100 four times and retransmits Seq=100, 20 bytes of data, before its timeout expires.
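The triple-duplicate-ACK trigger can be sketched as a small Python function (illustrative only; the name `should_fast_retransmit` and the list-of-ACKs interface are hypothetical):

```python
def should_fast_retransmit(acks: list[int]) -> bool:
    """True once the same ACK number arrives four times in a row:
    the original ACK plus three duplicates."""
    last, dups = None, 0
    for a in acks:
        dups = dups + 1 if a == last else 0
        last = a
        if dups == 3:
            return True
    return False

# In the slide's scenario the sender sees ACK=100 four times,
# so it retransmits Seq=100 without waiting for a timeout.
```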
19.
TCP flow control
Q: What happens if the network layer delivers data faster than the application layer removes data from socket buffers?
Receiver protocol stack: the network layer (IP code) delivers IP datagram payloads from the sender into the TCP socket buffers (TCP code); the application process removes data from the TCP socket buffers.
20.
21.
22.
TCP flow control
Q: What happens if the network layer delivers data faster than the application layer removes data from socket buffers?
Flow control: the receiver controls the sender, so the sender won't overflow the receiver's buffer by transmitting too much, too fast.
23.
TCP flow control
 TCP receiver "advertises" free buffer space in the rwnd field in the TCP header
• RcvBuffer size set via socket options (typical default is 4096 bytes)
• many operating systems autoadjust RcvBuffer
 sender limits amount of unACKed ("in-flight") data to received rwnd
 guarantees receive buffer will not overflow
TCP receiver-side buffering: TCP segment payloads arrive into RcvBuffer; buffered data is read out to the application process; rwnd is the remaining free buffer space.
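The sender-side rule, "limit in-flight data to the advertised rwnd," amounts to one comparison. A hypothetical sketch (the names `can_send`, `last_byte_sent`, `last_byte_acked` are illustrative, not a real TCP API):

```python
def can_send(last_byte_sent: int, last_byte_acked: int,
             nbytes: int, rwnd: int) -> bool:
    """True if sending nbytes more keeps unACKed data within rwnd."""
    in_flight = last_byte_sent - last_byte_acked
    return in_flight + nbytes <= rwnd

# With rwnd = 4000 and 3500 bytes already in flight, a 500-byte
# segment still fits, but a 501-byte segment must wait for an ACK.
```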
24.
25.
TCP connection management
Before exchanging data, sender/receiver "handshake":
 agree to establish connection (each knowing the other is willing to establish connection)
 agree on connection parameters (e.g., starting seq #s)
On both sides, connection state: ESTAB; connection variables: seq #s (client-to-server, server-to-client), rcvBuffer size at server, client.

client:
Socket clientSocket = new Socket("hostname", "port number");
server:
Socket connectionSocket = welcomeSocket.accept();
26.
Agreeing to establish a connection
2-way handshake: "Let's talk" / "OK". The client chooses x and sends req_conn(x); the server replies acc_conn(x); both sides enter ESTAB.
Q: will a 2-way handshake always work in the network?
 variable delays
 retransmitted messages (e.g., req_conn(x)) due to message loss
 message reordering
 can't "see" the other side
TCP 3-way handshake

Client state: LISTEN. Server state: LISTEN.
1. Client chooses init seq num x, sends TCP SYN msg (SYNbit=1, Seq=x); enters SYNSENT.
2. Server chooses init seq num y, sends TCP SYNACK msg, acking SYN (SYNbit=1, Seq=y; ACKbit=1, ACKnum=x+1); enters SYN RCVD.
3. Client receives SYNACK(x), which indicates the server is live; sends ACK for SYNACK (ACKbit=1, ACKnum=y+1); this segment may contain client-to-server data; enters ESTAB.
4. Server receives ACK(y), which indicates the client is live; enters ESTAB.

client:
clientSocket = socket(AF_INET, SOCK_STREAM)
clientSocket.connect((serverName, serverPort))
server:
serverSocket = socket(AF_INET, SOCK_STREAM)
serverSocket.bind(('', serverPort))
serverSocket.listen(1)
connectionSocket, addr = serverSocket.accept()
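The slide's socket calls can be run end-to-end on one machine: the three-way handshake happens inside `connect()`/`accept()`. This loopback sketch adds two things not on the slide: port 0 (so the OS picks a free port) and a thread so the server's `accept()` and the client's `connect()` can run in one process.

```python
import socket
import threading

serverSocket = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
serverSocket.bind(('', 0))                    # port 0: OS assigns a free port
serverPort = serverSocket.getsockname()[1]
serverSocket.listen(1)

result = {}
def serve():
    # accept() returns only after the 3-way handshake completes
    connectionSocket, addr = serverSocket.accept()
    result['peer'] = addr
    connectionSocket.close()

t = threading.Thread(target=serve)
t.start()
clientSocket = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
clientSocket.connect(('127.0.0.1', serverPort))   # SYN / SYNACK / ACK
t.join()
clientSocket.close()
serverSocket.close()
```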
31.
A human 3-way handshake protocol
1. On belay?
2. Belay on.
3. Climbing.
32.
Closing a TCP connection
 client, server each close their side of connection
• send TCP segment with FIN bit = 1
 respond to received FIN with ACK
• on receiving FIN, ACK can be combined with own FIN
 simultaneous FIN exchanges can be handled
33.
Transport Layer (closing slide)
COMPSCI 453 Computer Networks, Professor Jim Kurose, University of Massachusetts
Video: 2020, J.F. Kurose, All Rights Reserved
Powerpoint: 1996-2020, J.F. Kurose, K.W. Ross, All Rights Reserved
34.
TCP: Transmission Control Protocol
 segment structure
 reliable data transfer
 sequence numbers
 ACKs
 timers
35.
TCP sender (simplified)

NextSeqNum = InitialSeqNum
SendBase = InitialSeqNum
loop: wait for event

event: data received from application above
    create segment, seq. #: NextSeqNum
    pass segment to IP (i.e., "send")
    NextSeqNum = NextSeqNum + length(data)
    if (timer currently not running)
        start timer

event: timeout
    retransmit not-yet-acked segment with smallest seq. #
    start timer

event: ACK received, with ACK field value y
    if (y > SendBase) {
        SendBase = y
        /* SendBase-1: last cumulatively ACKed byte */
        if (there are currently not-yet-acked segments)
            start timer
        else stop timer
    }
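The three event handlers above translate almost line-for-line into Python. This is an illustrative sketch of the slide's simplified sender, not a working TCP: the class name `TCPSender` is hypothetical, the timer is just a boolean flag, and "pass segment to IP" is reduced to recording the segment in an `unacked` map.

```python
class TCPSender:
    def __init__(self, initial_seq: int):
        self.next_seq_num = initial_seq   # NextSeqNum
        self.send_base = initial_seq      # SendBase
        self.timer_running = False
        self.unacked = {}                 # seq # -> segment data

    def on_data(self, data: bytes):
        """event: data received from application above"""
        self.unacked[self.next_seq_num] = data   # create segment, "send"
        self.next_seq_num += len(data)
        if not self.timer_running:
            self.timer_running = True

    def on_timeout(self) -> int:
        """event: timeout. Returns seq # of segment to retransmit."""
        self.timer_running = True
        return min(self.unacked)   # not-yet-acked segment, smallest seq #

    def on_ack(self, y: int):
        """event: ACK received, with ACK field value y"""
        if y > self.send_base:
            self.send_base = y
            # drop every segment now cumulatively ACKed by y
            self.unacked = {s: d for s, d in self.unacked.items()
                            if s + len(d) > y}
            self.timer_running = bool(self.unacked)
```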
#10 The key thing to note here is that the ACK number (43) on the B-to-A segment is one more than the sequence number (42) on the A-to-B segment that triggered that ACK. Similarly, the ACK number (80) on the last A-to-B segment is one more than the sequence number (79) on the B-to-A segment that triggered that ACK.
#12 This is how TCP re-computes the estimated RTT each time a new SampleRTT is taken. The process is known as an exponentially weighted moving average, shown by the equation here. <say it> Here alpha reflects the influence of the most recent measurements on the estimated RTT; a typical value of alpha used in implementations is 0.125. The graph at the bottom shows measured RTTs between a host in Massachusetts and a host in France, as well as the estimated, "smoothed" RTT.
#13 Given this value of the estimated RTT, TCP computes the timeout interval to be the estimated RTT plus a "safety margin." The intuition is that if we are seeing a large variation in SampleRTT (the RTT estimates are fluctuating a lot), then we'll want a larger safety margin. So TCP computes the timeout interval to be the estimated RTT plus 4 times a measure of deviation in the RTT. The deviation in the RTT is computed as the EWMA of the difference between the most recently measured SampleRTT and the estimated RTT.
#14 Given these details of TCP sequence numbers, ACKs, and timers, we can now describe the big-picture view of how the TCP sender and receiver operate. You can check out the FSMs in the book; let's just give an English text description here, and let's start with the sender.
#15 Rather than immediately acknowledging this segment, many TCP implementations will wait for half a second for another in-order segment to arrive, and then generate a single cumulative ACK for both segments, thus decreasing the amount of ACK traffic. The arrival of this second in-order segment and the cumulative ACK generation that covers both segments is the second row in this table.
#16 To cement our understanding of TCP reliability, let's look at a few retransmission scenarios. In the first case, a TCP segment is transmitted and the ACK is lost, and the TCP timeout mechanism results in another copy being transmitted and then re-ACKed at the sender. In the second example, two segments are sent and acknowledged, but there is a premature timeout for the first segment, which is retransmitted. Note that when this retransmitted segment is received, the receiver has already received the first two segments, and so resends a cumulative ACK for both segments received so far, rather than an ACK for just this first segment.
#17 And in this last example, two segments are again transmitted; the first ACK is lost, but the second ACK, a cumulative ACK, arrives at the sender, which can then transmit a third segment, knowing that the first two have arrived, even though the ACK for the first segment was lost.
#18 Let's wrap up our study of TCP reliability by discussing an optimization to the original TCP known as TCP fast retransmit. Take a look at this example on the right, where 5 segments are transmitted and the second segment is lost. In this case the TCP receiver sends an ACK 100 acknowledging the first received segment. When the third segment arrives at the receiver, the TCP receiver sends another ACK 100, since the second segment has not arrived, and similarly for the 4th and 5th segments. Now what does the sender see? The sender receives the first ACK 100 it has been hoping for, but then three additional duplicate ACK 100s arrive. The sender knows that something's wrong: it knows the first segment arrived at the receiver, and that three later segments (the ones that generated the three duplicate ACKs) were received correctly but were not in order. That is, there was a missing segment at the receiver when each of the three duplicate ACKs was generated. With fast retransmit, the arrival of three duplicate ACKs causes the sender to retransmit its oldest unACKed segment, without waiting for a timeout event. This allows TCP to recover more quickly from what is very likely a loss event: specifically, that the second segment has been lost, since three higher-numbered segments were received.
#19 (Presuming an intro) Before diving into the details of TCP flow control, let's first get the general context and motivate the need for flow control. This diagram shows a typical transport-layer implementation. A segment is brought up the protocol stack to the transport layer, and the segment's payload is removed from the segment and written INTO socket buffers. How does data get taken OUT of socket buffers? By applications performing socket reads, as we learned in Chapter 2. And so the question is: "What happens if the network layer delivers data faster than an application-layer process removes data from socket buffers?" Let's watch a video of what happens when things arrive way too fast to be processed. <video> (I love that video.) Another human analogy showing the need for flow control is the saying, to use some English slang, "no one can drink from a firehose." Flow control is a mechanism to avoid the calamity of a receiver being over-run by a sender that is sending too fast; it allows the RECEIVER to explicitly control the SENDER so the sender won't overflow the receiver's buffer by transmitting too much, too fast.
#23 Here's how TCP implements flow control. The basic idea is simple: the receiver informs the sender how much free buffer space there is, and the sender is limited to sending no more than this amount of data. That's the value of rwnd in the diagram on the right. This information is carried from the receiver to the sender in the "receiver advertised window" (do a PIP of header) in the TCP header, and the value will change as the amount of free buffer space fluctuates over time.
#25 The other TCP topic we'll want to consider here is that of "connection management." The TCP sender and receiver have a number of pieces of shared state that they must establish before actually communicating. FIRST, they must both agree that they WANT to communicate with each other. Secondly, there are connection parameters (the initial sequence number and the initial receiver-advertised buffer space) that they'll want to agree on. This is done via a so-called handshake protocol: the client reaching out to the server, and the server answering back. And before diving into the TCP handshake protocol, let's first consider the problem of handshaking, of establishing shared state.
#26 Here's an example of a two-way handshake. Alice reaches out to Bob and says "let's talk" and Bob says OK, and they start their conversation. For a network protocol, the equivalent would be a client sending a "request connection" message saying "let's talk, the initial sequence number is x," and the server responding with a message "I accept your connection x." And the question we want to ask ourselves is <talk through>: will this work? Let's look at a few scenarios…
#30 TCP's three-way handshake operates as follows. Let's say the client and server both create a TCP socket as we learned about in Chapter 2 and enter the LISTEN state. The client then connects to the server, sending a SYN message with a sequence number x (a SYN message is a TCP segment with the SYN bit set in the header; you might want to go back and review the TCP segment format!). The server is waiting for a connection, receives the SYN message, enters the SYN RCVD state (NOT the established state), and sends a SYNACK message back. Finally, the client sends an ACK message to the server, and when the server receives this it enters the ESTABLISHED state. This is when the application process would see the return from the wait on the socket accept() call.
#31 As usual, there's a human protocol analogy to the three-way handshake, and I still remember thinking about this while clinging for my life climbing up a rock face. When you want to start climbing, you first say "ON BELAY?" (meaning ARE YOU READY WITH MY SAFETY ROPE). The BELAYER (server) responds "BELAY ON" (that lets you know the belayer is ready for you). And then you say "CLIMBING." It's amazing what can pass through your head when you're clinging for your life on a rock face.
#32 All good things must come to an end, and that's true for a TCP connection as well. And of course there's a protocol for one side to gracefully close its side of a TCP connection using a FIN message, to which the other side sends a FIN ACK message and waits around a bit to respond to any retransmitted FIN messages before timing out.