Split Your Dataset With scikit-learn's train_test_split()

by Mirko Stojiljković · Reading time: 21 min · intermediate · data-science · machine-learning · numpy


Recommended Course

Splitting Datasets With scikit-learn and train_test_split() (33m)

With train_test_split() from scikit-learn, you can efficiently divide your dataset into training and testing subsets to ensure unbiased model evaluation in machine learning. This process helps prevent overfitting and underfitting by keeping the test data separate from the training data, allowing you to assess the model’s predictive performance accurately.

By the end of this tutorial, you’ll understand that:

  • train_test_split() is a function in sklearn that divides datasets into training and testing subsets.
  • x_train and y_train represent the inputs and outputs of the training data subset, respectively, while x_test and y_test represent the inputs and outputs of the testing data subset.
  • By specifying test_size=0.2, you use 20% of the dataset for testing, leaving 80% for training.
  • train_test_split() can handle imbalanced datasets using the stratify parameter to maintain class distribution.

You’ll learn how to use train_test_split() and apply these concepts in real-world scenarios, ensuring your machine learning models are evaluated with precision and fairness. In addition, you’ll explore related tools from sklearn.model_selection for further insights.

Get Your Code: Click here to download the free sample code that you’ll use to learn about splitting your dataset with scikit-learn’s train_test_split().

Take the Quiz: Test your knowledge with our interactive “Split Your Dataset With scikit-learn's train_test_split()” quiz. You’ll receive a score upon completion to help you track your learning progress.

The Importance of Data Splitting

Supervised machine learning is about creating models that precisely map the given inputs to the given outputs. Inputs are also called independent variables or predictors, while outputs may be referred to as dependent variables or responses.

How you measure the precision of your model depends on the type of problem you’re trying to solve. In regression analysis, you typically use the coefficient of determination, root mean square error, mean absolute error, or similar quantities. For classification problems, you often apply accuracy, precision, recall, F1 score, and related indicators.

The acceptable numeric values that measure precision vary from field to field. You can find detailed explanations from Statistics By Jim, Quora, and many other resources.

What’s most important to understand is that you usually need unbiased evaluation to properly use these measures, assess the predictive performance of your model, and validate the model.

This means that you can’t evaluate the predictive performance of a model with the same data you used for training. You need to evaluate the model with fresh data that the model hasn’t seen before. You can accomplish that by splitting your dataset before you use it.

Training, Validation, and Test Sets

Splitting your dataset is essential for an unbiased evaluation of prediction performance. In most cases, it’s enough to split your dataset randomly into three subsets:

  1. The training set is applied to train, or fit, your model. For example, you use the training set to find the optimal weights, or coefficients, for linear regression, logistic regression, or neural networks.

  2. The validation set is used for unbiased model evaluation during hyperparameter tuning. For example, when you want to find the optimal number of neurons in a neural network or the best kernel for a support vector machine, you experiment with different values. For each considered setting of hyperparameters, you fit the model with the training set and assess its performance with the validation set.

  3. The test set is needed for an unbiased evaluation of the final model. You shouldn’t use it for fitting or validation.

In less complex cases, when you don’t have to tune hyperparameters, it’s okay to work with only the training and test sets.
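When you do need all three subsets, one common approach is to call train_test_split() twice: once to set aside the test set, and once more to divide the remainder into training and validation sets. The sketch below uses made-up arrays and arbitrary sizes just to show the mechanics:

```python
import numpy as np
from sklearn.model_selection import train_test_split

x = np.arange(100).reshape(50, 2)
y = np.arange(50)

# First split: hold out 20% of the data as the final test set.
x_temp, x_test, y_temp, y_test = train_test_split(
    x, y, test_size=0.2, random_state=0
)

# Second split: divide the remainder into training and validation sets.
# 0.25 of the remaining 80% equals 20% of the original dataset.
x_train, x_val, y_train, y_val = train_test_split(
    x_temp, y_temp, test_size=0.25, random_state=0
)

print(len(x_train), len(x_val), len(x_test))  # 30 10 10
```

Note that the second test_size is expressed relative to the remaining data, not the original dataset, which is why 0.25 rather than 0.2 yields an even 60/20/20 split here.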

Underfitting and Overfitting

Splitting a dataset might also be important for detecting if your model suffers from one of two very common problems, called underfitting and overfitting:

  1. Underfitting is usually the consequence of a model being unable to encapsulate the relations among data. For example, this can happen when trying to represent nonlinear relations with a linear model. Underfitted models will likely have poor performance with both training and test sets.

  2. Overfitting usually takes place when a model has an excessively complex structure and learns both the existing relations among data and noise. Such models often have bad generalization capabilities. Although they work well with training data, they usually yield poor performance with unseen test data.

You can find a more detailed explanation of underfitting and overfitting in Linear Regression in Python.
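You can also see both effects numerically with a small sketch. This example isn’t from the tutorial above; it uses NumPy’s polynomial fitting on made-up sine-wave data, comparing a model that’s too simple (degree 1) with one that’s flexible enough to chase noise (degree 8):

```python
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(0, 1, 30)
y = np.sin(2 * np.pi * x) + rng.normal(scale=0.2, size=x.size)  # nonlinear data

x_train, y_train = x[::2], y[::2]   # even indices as a simple "training set"
x_test, y_test = x[1::2], y[1::2]   # odd indices as a "test set"

def mse(coeffs, xs, ys):
    """Mean squared error of a fitted polynomial on (xs, ys)."""
    return float(np.mean((np.polyval(coeffs, xs) - ys) ** 2))

underfit = np.polyfit(x_train, y_train, deg=1)  # a line can't follow a sine wave
overfit = np.polyfit(x_train, y_train, deg=8)   # flexible enough to fit noise

# The underfit model performs poorly on both sets. The overfit model
# achieves a lower training error, but that error overstates how well
# it will do on the unseen test points.
print(mse(underfit, x_train, y_train), mse(underfit, x_test, y_test))
print(mse(overfit, x_train, y_train), mse(overfit, x_test, y_test))
```

The degree-8 training error is guaranteed to be at most the degree-1 training error, since the simpler model is a special case of the more complex one; the interesting comparison is how much worse each model looks on the held-out points.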

Prerequisites for Using train_test_split()

Now that you understand the need to split a dataset in order to perform unbiased model evaluation and identify underfitting or overfitting, you’re ready to learn how to split your own datasets.

You’ll use version 1.5.0 of scikit-learn, or sklearn. It has many packages for data science and machine learning, but for this tutorial, you’ll focus on the model_selection package, specifically on the function train_test_split().

Note: While this tutorial is tested with this specific version of scikit-learn, the features that you’ll use are core to the library and should work equivalently in other versions of scikit-learn as well.

You can install sklearn with pip:

```shell
$ python -m pip install "scikit-learn==1.5.0"
```

If you use Anaconda, then you probably already have it installed. However, if you want to use a fresh environment, ensure that you have the specified version or use Miniconda. Then you can install sklearn from Anaconda Cloud with conda install:

```shell
$ conda install -c anaconda scikit-learn=1.5.0
```

You’ll also need NumPy, but you don’t have to install it separately. You should get it along with sklearn if you don’t already have it installed. If you want to, you can refresh your NumPy knowledge and check out NumPy Tutorial: Your First Steps Into Data Science in Python.

Application of train_test_split()

You need to import train_test_split() and NumPy before you can use them. You can work in a Jupyter notebook or start a new Python REPL session, and then begin with the import statements:

```python
>>> import numpy as np
>>> from sklearn.model_selection import train_test_split
```

Now that you have both imported, you can use them to split data into training sets and test sets. You’ll split inputs and outputs at the same time, with a single function call.

With train_test_split(), you only need to provide the arrays that you want to split. Additionally, you can also provide some optional arguments. The function usually returns a list of NumPy arrays but can also return a couple of other iterable types, such as SciPy sparse matrices, if appropriate:

```python
sklearn.model_selection.train_test_split(*arrays, **options) -> list
```

The arrays parameter in the function signature of train_test_split() refers to the sequence of lists, NumPy arrays, pandas DataFrames, or similar array-like objects that hold the data that you want to split. All these objects together make up the dataset and must be of the same length.

In supervised machine learning applications, you’ll typically work with two such arrays:

  1. A two-dimensional array with the inputs (x)
  2. A one-dimensional array with the outputs (y)

The options parameter indicates that you can customize the function’s behavior with optional keyword arguments:

  • train_size is the number that defines the size of the training set. If you provide a float, then it must be between 0.0 and 1.0 and it defines the share of the dataset used for training. If you provide an int, then it represents the total number of training samples. The default value is None.

  • test_size is the number that defines the size of the test set. It’s very similar to train_size. You should provide either train_size or test_size. If neither is given, then the default share of the dataset that will be used for testing is 0.25, or 25 percent.

  • random_state is the object that controls randomization during splitting. It can be either an int or an instance of RandomState. Setting the random state is useful if you need reproducibility. The default value is None.

  • shuffle is the Boolean object that determines whether to shuffle the dataset before applying the split. The default value is True.

  • stratify is an array-like object that, if not None, determines how to use a stratified split.
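The parameters above can be sketched quickly on a toy ten-sample dataset, showing the default split size, an integer train_size, and the effect of shuffle=False:

```python
import numpy as np
from sklearn.model_selection import train_test_split

x = np.arange(10).reshape(10, 1)
y = np.arange(10)

# Default: 25 percent of the samples go to the test set (rounded up).
x_train, x_test, _, _ = train_test_split(x, y, random_state=0)
print(len(x_train), len(x_test))  # 7 3

# An integer train_size is an absolute number of training samples.
x_train, x_test, _, _ = train_test_split(x, y, train_size=6, random_state=0)
print(len(x_train), len(x_test))  # 6 4

# shuffle=False keeps the original order, so no random_state is needed,
# and the test set is simply the tail of the data.
x_train, x_test, _, _ = train_test_split(x, y, shuffle=False)
print(x_test.ravel())  # [7 8 9]
```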

Now it’s time to try data splitting! You’ll start by creating a simple dataset to work with. The dataset will contain the inputs in the two-dimensional array x and outputs in the one-dimensional array y:

```python
>>> x = np.arange(1, 25).reshape(12, 2)
>>> y = np.array([0, 1, 1, 0, 1, 0, 0, 1, 1, 0, 1, 0])
>>> x
array([[ 1,  2],
       [ 3,  4],
       [ 5,  6],
       [ 7,  8],
       [ 9, 10],
       [11, 12],
       [13, 14],
       [15, 16],
       [17, 18],
       [19, 20],
       [21, 22],
       [23, 24]])
>>> y
array([0, 1, 1, 0, 1, 0, 0, 1, 1, 0, 1, 0])
```

To get your data, you use arange(), which is very convenient for generating arrays based on numerical ranges. You also use .reshape() to modify the shape of the array returned by arange() and get a two-dimensional data structure.

You can split both input and output datasets with a single function call:

```python
>>> x_train, x_test, y_train, y_test = train_test_split(x, y)
>>> x_train
array([[15, 16],
       [21, 22],
       [11, 12],
       [17, 18],
       [13, 14],
       [ 9, 10],
       [ 1,  2],
       [ 3,  4],
       [19, 20]])
>>> x_test
array([[ 5,  6],
       [ 7,  8],
       [23, 24]])
>>> y_train
array([1, 1, 0, 1, 0, 1, 0, 1, 0])
>>> y_test
array([1, 0, 0])
```

Given two arrays, like x and y here, train_test_split() performs the split and returns four arrays (in this case NumPy arrays) in this order:

  1. x_train: The training part of the first array (x)
  2. x_test: The test part of the first array (x)
  3. y_train: The training part of the second array (y)
  4. y_test: The test part of the second array (y)

You probably got different results from what you see here. This is because dataset splitting is random by default. The result differs each time you run the function. However, this often isn’t what you want.

Sometimes, to make your tests reproducible, you need a random split with the same output for each function call. You can do that with the parameter random_state. The value of random_state isn’t important; it can be any non-negative integer. You could use an instance of numpy.random.RandomState instead, but that’s a more complex approach.
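Reproducibility is easy to check: two calls with the same random_state produce identical splits, as in this small sketch with arbitrary data:

```python
import numpy as np
from sklearn.model_selection import train_test_split

x = np.arange(16).reshape(8, 2)
y = np.arange(8)

# Two calls with the same random_state produce identical splits.
a_train, a_test, _, _ = train_test_split(x, y, random_state=42)
b_train, b_test, _, _ = train_test_split(x, y, random_state=42)
print(np.array_equal(a_test, b_test))  # True

# Omitting random_state would typically yield a different split each run.
```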

In the previous example, you used a dataset with twelve rows, or observations, and got a training sample with nine rows and a test sample with three rows. That’s because you didn’t specify the desired size of the training and test sets. By default, 25 percent of samples are assigned to the test set. This ratio is generally fine for many applications, but it’s not always what you need.

Typically, you’ll want to define the size of the test or training set explicitly, and sometimes you’ll even want to experiment with different values. You can do that with the parameters train_size or test_size.

Modify the code so you can choose the size of the test set and get a reproducible result:

```python
>>> x_train, x_test, y_train, y_test = train_test_split(
...     x, y, test_size=4, random_state=4
... )
>>> x_train
array([[17, 18],
       [ 5,  6],
       [23, 24],
       [ 1,  2],
       [ 3,  4],
       [11, 12],
       [15, 16],
       [21, 22]])
>>> x_test
array([[ 7,  8],
       [ 9, 10],
       [13, 14],
       [19, 20]])
>>> y_train
array([1, 1, 0, 0, 1, 0, 1, 1])
>>> y_test
array([0, 1, 0, 0])
```

With this change, you get a different result from before. Earlier, you had a training set with nine items and a test set with three items. Now, thanks to the argument test_size=4, the training set has eight items and the test set has four items. You’d get the same result with test_size=0.33 because 33 percent of twelve is approximately four.

There’s one more very important difference between the last two examples: You now get the same result each time you run the function. This is because you’ve fixed the random number generator with random_state=4.
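You can confirm the int/float equivalence directly: scikit-learn rounds a float test_size up, so ceil(0.33 × 12) = 4 samples land in the test set either way:

```python
import numpy as np
from sklearn.model_selection import train_test_split

x = np.arange(1, 25).reshape(12, 2)
y = np.array([0, 1, 1, 0, 1, 0, 0, 1, 1, 0, 1, 0])

# test_size=4 (absolute) and test_size=0.33 (fraction) give the same
# test-set size here, because ceil(0.33 * 12) == 4.
_, test_int, _, _ = train_test_split(x, y, test_size=4, random_state=4)
_, test_float, _, _ = train_test_split(x, y, test_size=0.33, random_state=4)
print(len(test_int), len(test_float))  # 4 4
```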

The figure below shows what’s going on when you call train_test_split():

[Figure: a dataset being shuffled and then split into training and test sets]

The samples of the dataset are shuffled randomly and then split into the training and test sets according to the size you defined.

You can see that y has six zeros and six ones. However, the test set has three zeros out of four items. If you want to (approximately) keep the proportion of y values through the training and test sets, then pass stratify=y. This will enable stratified splitting:

```python
>>> x_train, x_test, y_train, y_test = train_test_split(
...     x, y, test_size=0.33, random_state=4, stratify=y
... )
>>> x_train
array([[21, 22],
       [ 1,  2],
       [15, 16],
       [13, 14],
       [17, 18],
       [19, 20],
       [23, 24],
       [ 3,  4]])
>>> x_test
array([[11, 12],
       [ 7,  8],
       [ 5,  6],
       [ 9, 10]])
>>> y_train
array([1, 0, 1, 0, 1, 0, 0, 1])
>>> y_test
array([0, 0, 1, 1])
```

Now y_train and y_test have the same ratio of zeros and ones as the original y array.

Stratified splits are desirable in some cases, like when you’re classifying an imbalanced dataset, which is a dataset with a significant difference in the number of samples that belong to distinct classes.
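Here’s a quick sketch of why stratification matters on imbalanced data. With a made-up 90/10 class split, stratify=y keeps that ratio exact in both subsets:

```python
import numpy as np
from sklearn.model_selection import train_test_split

# An imbalanced dataset: 90 samples of class 0, 10 of class 1.
x = np.arange(100).reshape(100, 1)
y = np.array([0] * 90 + [1] * 10)

x_train, x_test, y_train, y_test = train_test_split(
    x, y, test_size=0.2, stratify=y, random_state=0
)

# The 90/10 class ratio is preserved in both subsets.
print(np.bincount(y_train))  # [72  8]
print(np.bincount(y_test))   # [18  2]
```

Without stratify, a random 20-sample test set could easily contain zero, one, or five minority-class samples, which would distort any classification metric you compute on it.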

Finally, you can turn off data shuffling and random splitting with shuffle=False:

```python
>>> x_train, x_test, y_train, y_test = train_test_split(
...     x, y, test_size=0.33, shuffle=False
... )
>>> x_train
array([[ 1,  2],
       [ 3,  4],
       [ 5,  6],
       [ 7,  8],
       [ 9, 10],
       [11, 12],
       [13, 14],
       [15, 16]])
>>> x_test
array([[17, 18],
       [19, 20],
       [21, 22],
       [23, 24]])
>>> y_train
array([0, 1, 1, 0, 1, 0, 0, 1])
>>> y_test
array([1, 0, 1, 0])
```

Now you have a split in which the first two-thirds of samples in the original x and y arrays are assigned to the training set and the last third to the test set. No shuffling. No randomness.

Supervised Machine Learning With train_test_split()

Now it’s time to see train_test_split() in action when solving supervised learning problems. You’ll start with a small regression problem that can be solved with linear regression before looking at a bigger problem. You’ll also see that you can use train_test_split() for classification as well.

Minimalist Example of Linear Regression

In this example, you’ll apply what you’ve learned so far to solve a small regression problem. You’ll learn how to create datasets, split them into training and test subsets, and use them for linear regression.

As always, you’ll start by importing the necessary packages, functions, or classes. You’ll need NumPy, LinearRegression, and train_test_split():

```python
>>> import numpy as np
>>> from sklearn.linear_model import LinearRegression
>>> from sklearn.model_selection import train_test_split
```

Now that you’ve imported everything you need, you can create two small arrays, x and y, to represent the observations and then split them into training and test sets just as you did before:

```python
>>> x = np.arange(20).reshape(-1, 1)
>>> y = np.array([5, 12, 11, 19, 30, 29, 23, 40, 51, 54, 74,
...               62, 68, 73, 89, 84, 89, 101, 99, 106])
>>> x
array([[ 0],
       [ 1],
       [ 2],
       [ 3],
       [ 4],
       [ 5],
       [ 6],
       [ 7],
       [ 8],
       [ 9],
       [10],
       [11],
       [12],
       [13],
       [14],
       [15],
       [16],
       [17],
       [18],
       [19]])
>>> y
array([  5,  12,  11,  19,  30,  29,  23,  40,  51,  54,  74,  62,  68,
        73,  89,  84,  89, 101,  99, 106])
>>> x_train, x_test, y_train, y_test = train_test_split(
...     x, y, test_size=8, random_state=0
... )
```

Your dataset has twenty observations, or x-y pairs. You specify the argument test_size=8, so the dataset is divided into a training set with twelve observations and a test set with eight observations.

Now you can use the training set to fit the model:

```python
>>> model = LinearRegression().fit(x_train, y_train)
>>> model.intercept_
np.float64(3.1617195496417523)
>>> model.coef_
array([5.53121801])
```

LinearRegression creates the object that represents the model, while .fit() trains, or fits, the model and returns it. With linear regression, fitting the model means determining the best intercept (model.intercept_) and slope (model.coef_) values of the regression line.

Although you can use x_train and y_train to check the goodness of fit, this isn’t a best practice. An unbiased estimation of the predictive performance of your model is based on test data:

```python
>>> model.score(x_train, y_train)
0.9868175024574795
>>> model.score(x_test, y_test)
0.9465896927715023
```

.score() returns the coefficient of determination, or R², for the data passed. Its maximum is 1. The higher the R² value, the better the fit. In this case, the training data yields a slightly higher coefficient. However, the R² calculated with test data is an unbiased measure of your model’s prediction performance.
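If you want to see what .score() computes under the hood, here’s a sketch on made-up data that reproduces R² by hand as one minus the ratio of the residual sum of squares to the total sum of squares:

```python
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
x = np.arange(10).reshape(-1, 1)
y = 3.0 * x.ravel() + 2.0 + rng.normal(scale=1.0, size=10)  # noisy line

model = LinearRegression().fit(x, y)

# R² = 1 - SS_res / SS_tot
y_pred = model.predict(x)
ss_res = np.sum((y - y_pred) ** 2)          # residual sum of squares
ss_tot = np.sum((y - y.mean()) ** 2)        # total sum of squares
r2_manual = 1 - ss_res / ss_tot

print(np.isclose(r2_manual, model.score(x, y)))  # True
```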

This is how it looks on a graph:

[Figure: the estimated regression line with training points (green) and test points (white)]

The green dots represent the x-y pairs used for training. The black line, called the estimated regression line, is defined by the results of model fitting: the intercept and the slope. So, it reflects the positions of the green dots only.

The white dots represent the test set. You use them to estimate the performance of the model (regression line) with data not used for training.

Regression Example

Now you’re ready to split a larger dataset to solve a regression problem. You’ll use the California Housing dataset, which is included in sklearn. This dataset has 20,640 samples, eight input variables, and the house values as the output. You can retrieve it with sklearn.datasets.fetch_california_housing().

First, import train_test_split() and fetch_california_housing():

```python
>>> from sklearn.datasets import fetch_california_housing
>>> from sklearn.model_selection import train_test_split
```

Now that you have both functions imported, you can get the data you’ll work with:

```python
>>> x, y = fetch_california_housing(return_X_y=True)
```

As you can see, fetch_california_housing() with the argument return_X_y=True returns a tuple with two NumPy arrays:

  1. A two-dimensional array with the inputs
  2. A one-dimensional array with the outputs

The next step is to split the data the same way as before:

```python
>>> x_train, x_test, y_train, y_test = train_test_split(
...     x, y, test_size=0.4, random_state=0
... )
```

Now you have the training and test sets. The training data is contained in x_train and y_train, while the data for testing is in x_test and y_test.

When you work with larger datasets, it’s usually more convenient to pass the training or test size as a ratio. test_size=0.4 means that approximately 40 percent of samples will be assigned to the test data, and the remaining 60 percent will be assigned to the training data.

Finally, you can use the training set (x_train and y_train) to fit the model and the test set (x_test and y_test) for an unbiased evaluation of the model. In this example, you’ll apply three well-known regression algorithms to create models that fit your data:

  1. Linear regression with LinearRegression()
  2. Gradient boosting with GradientBoostingRegressor()
  3. Random forest with RandomForestRegressor()

The process is pretty much the same as with the previous example:

  1. Import the classes you need.
  2. Create model instances using these classes.
  3. Fit the model instances with.fit() using the training set.
  4. Evaluate the model with.score() using the test set.

Here’s the code that follows the steps described above for all three regression algorithms:

```python
>>> from sklearn.linear_model import LinearRegression
>>> model = LinearRegression().fit(x_train, y_train)
>>> model.score(x_train, y_train)
0.6105322745695656
>>> model.score(x_test, y_test)
0.5982535501446862

>>> from sklearn.ensemble import GradientBoostingRegressor
>>> model = GradientBoostingRegressor(random_state=0).fit(x_train, y_train)
>>> model.score(x_train, y_train)
0.8083859166342285
>>> model.score(x_test, y_test)
0.7802104901623703

>>> from sklearn.ensemble import RandomForestRegressor
>>> model = RandomForestRegressor(random_state=0).fit(x_train, y_train)
>>> model.score(x_train, y_train)
0.9727449572570027
>>> model.score(x_test, y_test)
0.7933138227558006
```

You’ve used your training and test datasets to fit three models and evaluate their performance. The measure of accuracy obtained with .score() is the coefficient of determination. It can be calculated with either the training or test set. However, as you already learned, the score obtained with the test set represents an unbiased estimation of performance.

As mentioned in the documentation, you can provide optional arguments to LinearRegression(), GradientBoostingRegressor(), and RandomForestRegressor(). GradientBoostingRegressor() and RandomForestRegressor() use the random_state parameter for the same reason that train_test_split() does: to deal with randomness in the algorithms and ensure reproducibility.

For some methods, you may also need feature scaling. In such cases, you should fit the scalers with training data and use them to transform test data.
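The fit-on-train, transform-both pattern can be sketched with one common scaler, StandardScaler, on made-up data. The key point is that the test set never influences the scaling parameters:

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
x = rng.normal(loc=50.0, scale=10.0, size=(100, 2))
y = rng.normal(size=100)

x_train, x_test, y_train, y_test = train_test_split(x, y, random_state=0)

# Fit the scaler on the training data only, then apply it to both subsets.
scaler = StandardScaler().fit(x_train)
x_train_scaled = scaler.transform(x_train)
x_test_scaled = scaler.transform(x_test)  # no refitting on test data

# The training features are now exactly centered. The test features are
# only approximately centered, because they didn't define the scaling.
print(np.allclose(x_train_scaled.mean(axis=0), 0.0))  # True
```

Calling fit() (or fit_transform()) on the test data would leak information from the test set into preprocessing and bias the final evaluation.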

Classification Example

You can use train_test_split() to solve classification problems the same way you do for regression analysis. In machine learning, classification problems involve training a model to apply labels to, or classify, the input values and sort your dataset into categories.

In the tutorial Logistic Regression in Python, you’ll find an example of a handwriting recognition task. The example provides another demonstration of splitting data into training and test sets to avoid bias in the evaluation process.

Other Validation Functionalities

The package sklearn.model_selection offers a lot of functionalities related to model selection and validation, including the following:

  • Cross-validation
  • Learning curves
  • Hyperparameter tuning

Cross-validation is a set of techniques that combine the measures of prediction performance to get more accurate model estimations.

One of the widely used cross-validation methods is k-fold cross-validation. In it, you divide your dataset into k (often five or ten) subsets, or folds, of equal size and then perform the training and test procedures k times. Each time, you use a different fold as the test set and all the remaining folds as the training set. This provides k measures of predictive performance, and you can then analyze their mean and standard deviation.

You can implement cross-validation with KFold, StratifiedKFold, LeaveOneOut, and a few other classes and functions from sklearn.model_selection.
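Here’s a minimal sketch of 5-fold cross-validation with KFold and cross_val_score, on made-up, roughly linear data; each fold serves as the test set exactly once:

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import KFold, cross_val_score

x = np.arange(20).reshape(-1, 1)
y = 2.0 * x.ravel() + np.tile([0.5, -0.5], 10)  # roughly linear data

# Five folds: each fold is used as the test set exactly once.
cv = KFold(n_splits=5, shuffle=True, random_state=0)
scores = cross_val_score(LinearRegression(), x, y, cv=cv)

print(len(scores))  # 5
print(scores.mean(), scores.std())  # mean and spread of the R² scores
```

The mean of the five R² scores estimates predictive performance, and the standard deviation hints at how sensitive that estimate is to the particular split.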

A learning curve, sometimes called a training curve, shows how the prediction score of the training and validation sets depends on the number of training samples. You can use learning_curve() to get this dependency, which can help you find the optimal size of the training set, choose hyperparameters, compare models, and so on.

Hyperparameter tuning, also called hyperparameter optimization, is the process of determining the best set of hyperparameters to define your machine learning model. sklearn.model_selection provides you with several options for this purpose, including GridSearchCV, RandomizedSearchCV, validation_curve(), and others. Splitting your data is also important for hyperparameter tuning.
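The interplay between splitting and tuning can be sketched with GridSearchCV on made-up data and an arbitrary tiny parameter grid: the search cross-validates on the training data only, so the test set stays untouched for the final evaluation:

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import GridSearchCV, train_test_split

rng = np.random.default_rng(0)
x = rng.normal(size=(60, 3))
y = x @ np.array([1.0, -2.0, 0.5]) + rng.normal(scale=0.1, size=60)

x_train, x_test, y_train, y_test = train_test_split(x, y, random_state=0)

# Each candidate in the grid is scored with 3-fold cross-validation
# on the training data; the test set plays no part in the search.
search = GridSearchCV(
    RandomForestRegressor(random_state=0),
    param_grid={"n_estimators": [10, 50], "max_depth": [2, None]},
    cv=3,
)
search.fit(x_train, y_train)

print(search.best_params_)
print(search.score(x_test, y_test))  # final, unbiased evaluation
```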

Conclusion

You now know why and how to use train_test_split() from sklearn. You’ve learned that, for an unbiased estimation of the predictive performance of machine learning models, you should use data that hasn’t been used for model fitting. That’s why you need to split your dataset into training, test, and in some cases, validation subsets.

In this tutorial, you’ve learned how to:

  • Use train_test_split() to get training and test sets
  • Control the size of the subsets with the parameters train_size and test_size
  • Determine the randomness of your splits with the random_state parameter
  • Obtain stratified splits with the stratify parameter
  • Use train_test_split() as a part of supervised machine learning procedures

You’ve also seen that the sklearn.model_selection module offers several other tools for model validation, including cross-validation, learning curves, and hyperparameter tuning.

If you have questions or comments, then please put them in the comment section below.


Frequently Asked Questions

Now that you have some experience with scikit-learn’s train_test_split(), you can use the questions and answers below to check your understanding and recap what you’ve learned.

These FAQs are related to the most important concepts you’ve covered in this tutorial.

What does train_test_split() do?

train_test_split() is a function from scikit-learn that you use to split your dataset into training and test subsets, which helps you perform unbiased model evaluation and validation.

What are x_train and y_train?

x_train and y_train are the parts of your dataset that you use to train, or fit, your machine learning model. x_train contains the input data, while y_train contains the corresponding output labels.

What does test_size=0.2 mean?

When you set test_size=0.2 in train_test_split(), you specify that 20% of your dataset should be used as the test set for evaluating your model, with the remaining 80% used for training.

Can train_test_split() handle imbalanced datasets?

Yes, train_test_split() can handle imbalanced datasets by using the stratify parameter, which ensures that the class distribution in the training and test sets matches the original dataset.


About Mirko Stojiljković

Mirko has a Ph.D. in Mechanical Engineering and works as a university professor. He is a Pythonista who applies hybrid optimization and machine learning methods to support decision making in the energy sector.

» More about Mirko
