Add support for Intel GPU to Fast Neural Style example #1318

Open · wants to merge 1 commit into base: main
10 changes: 6 additions & 4 deletions fast_neural_style/README.md
@@ -26,8 +26,9 @@ python neural_style/neural_style.py eval --content-image </path/to/content/image
 - `--model`: saved model to be used for stylizing the image (eg: `mosaic.pth`)
 - `--output-image`: path for saving the output image.
 - `--content-scale`: factor for scaling down the content image if memory is an issue (eg: value of 2 will halve the height and width of content-image)
-- `--cuda`: set it to 1 for running on GPU, 0 for CPU.
-- `--mps`: set it to 1 for running on macOS GPU
+- `--cuda 0|1`: set it to 1 for running on GPU, 0 for CPU.
+- `--mps`: use the MPS (macOS GPU) device backend.
+- `--xpu`: use the XPU (Intel GPU) device backend.

Train model

@@ -40,8 +41,9 @@ There are several command line arguments, the important ones are listed below
 - `--dataset`: path to training dataset, the path should point to a folder containing another folder with all the training images. I used COCO 2014 Training images dataset [80K/13GB] [(download)](https://cocodataset.org/#download).
 - `--style-image`: path to style-image.
 - `--save-model-dir`: path to folder where trained model will be saved.
-- `--cuda`: set it to 1 for running on GPU, 0 for CPU.
-- `--mps`: set it to 1 for running on macOS GPU
+- `--cuda 0|1`: set it to 1 for running on GPU, 0 for CPU.
+- `--mps`: use the MPS (macOS GPU) device backend.
+- `--xpu`: use the XPU (Intel GPU) device backend.

Refer to `neural_style/neural_style.py` for other command line arguments. For training new models you might have to tune the values of `--content-weight` and `--style-weight`. The mosaic style model shown above was trained with `--content-weight 1e5` and `--style-weight 1e10`. The remaining 3 models were also trained with similar order of weight parameters with slight variation in the `--style-weight` (`5e10` or `1e11`).
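The argument lists above mix two flag styles: `--cuda` takes an explicit `0`/`1` value, while `--mps` and `--xpu` are presence-only switches. A minimal torch-free sketch of the difference (a hypothetical standalone parser, not the PR's code):

```python
import argparse

# --cuda carries an explicit 0/1 value; --mps/--xpu are
# presence-only switches (argparse action='store_true').
parser = argparse.ArgumentParser()
parser.add_argument("--cuda", type=int, default=0,
                    help="1 to run on a CUDA GPU, 0 for CPU")
parser.add_argument("--mps", action="store_true",
                    help="use the MPS (macOS GPU) backend")
parser.add_argument("--xpu", action="store_true",
                    help="use the XPU (Intel GPU) backend")

args = parser.parse_args(["--xpu"])
print(args.cuda, args.mps, args.xpu)  # 0 False True
```

So `--xpu` alone enables the Intel backend, while CUDA still requires the explicit `--cuda 1`.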

19 changes: 18 additions & 1 deletion fast_neural_style/neural_style/neural_style.py
@@ -33,9 +33,13 @@ def train(args):
         device = torch.device("cuda")
     elif args.mps:
         device = torch.device("mps")
+    elif args.xpu:
+        device = torch.device("xpu")
     else:
         device = torch.device("cpu")
 
+    print("Device to use: ", device)
+
     np.random.seed(args.seed)
     torch.manual_seed(args.seed)

@@ -126,6 +130,9 @@ def train(args):

 def stylize(args):
-    device = torch.device("cuda" if args.cuda else "cpu")
+    # prefer CUDA when requested, otherwise fall back to XPU, then CPU
+    device = torch.device("cuda" if args.cuda else "xpu" if args.xpu else "cpu")
+
+    print("Device to use: ", device)

content_image = utils.load_image(args.content_image, scale=args.content_scale)
content_transform = transforms.Compose([
@@ -219,6 +226,10 @@ def main():
                                   help="number of images after which the training loss is logged, default is 500")
     train_arg_parser.add_argument("--checkpoint-interval", type=int, default=2000,
                                   help="number of batches after which a checkpoint of the trained model will be created")
+    train_arg_parser.add_argument('--mps', action='store_true',
+                                  help='enable macOS GPU training')
+    train_arg_parser.add_argument('--xpu', action='store_true',
+                                  help='enable Intel XPU training')

eval_arg_parser = subparsers.add_parser("eval", help="parser for evaluation/stylizing arguments")
eval_arg_parser.add_argument("--content-image", type=str, required=True,
@@ -233,7 +244,11 @@ def main():
                                  help="set it to 1 for running on cuda, 0 for CPU")
     eval_arg_parser.add_argument("--export_onnx", type=str,
                                  help="export ONNX model to a given file")
-    eval_arg_parser.add_argument('--mps', action='store_true', default=False, help='enable macOS GPU training')
+    eval_arg_parser.add_argument('--mps', action='store_true',
+                                 help='enable macOS GPU evaluation')
+    eval_arg_parser.add_argument('--xpu', action='store_true',
+                                 help='enable Intel XPU evaluation')


args = main_arg_parser.parse_args()

@@ -245,6 +260,8 @@ def main():
         sys.exit(1)
     if not args.mps and torch.backends.mps.is_available():
         print("WARNING: mps is available, run with --mps to enable macOS GPU")
+    if not args.xpu and torch.xpu.is_available():
+        print("WARNING: XPU is available, run with --xpu to enable Intel XPU")
 
     if args.subcommand == "train":
         check_paths(args)
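Taken together, the hunks above give `train()` a cuda > mps > xpu > cpu device precedence. A torch-free sketch of that precedence, using plain strings so it runs anywhere (`resolve_device` is a hypothetical helper name, not part of the PR):

```python
def resolve_device(cuda: bool, mps: bool, xpu: bool) -> str:
    # Same precedence as train() in the diff: CUDA first,
    # then MPS, then XPU, falling back to CPU.
    if cuda:
        return "cuda"
    if mps:
        return "mps"
    if xpu:
        return "xpu"
    return "cpu"

print(resolve_device(cuda=False, mps=False, xpu=True))  # xpu
```

In the real script each branch would construct a `torch.device` from the returned string, and `--xpu` only takes effect when neither `--cuda 1` nor `--mps` is given.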