[Benchmarks] Add benchmarks scripts for Torch MultiQueue benchmarks #20665
base: sycl
Conversation
Force-pushed from c6c8756 to 7e5023c
lslusarczyk left a comment:
If the tests pass it is OK to me, but @PatKamin may see more; let's wait for his review too.
```python
    ),
]

for runtime in RUNTIMES:
```
cosmetic: you can use `filter` instead of a nested `if`:

```python
>>> a = [1, 2, 3, 4, 5]
>>> for i in filter(lambda x: x != 3, a):
...     print(i)
...
1
2
4
5
```
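Applied to the loop above, the same idea might look like the sketch below. The `Runtime` enum, the `RUNTIMES` contents, and the excluded value are hypothetical stand-ins, since the actual nested `if` condition is not visible in the quoted diff:

```python
from enum import Enum

# Hypothetical stand-ins: the real RUNTIMES list and the condition in the
# nested `if` are not shown in the quoted diff.
class Runtime(Enum):
    SYCL = "sycl"
    L0 = "l0"
    UR = "ur"

RUNTIMES = [Runtime.SYCL, Runtime.L0, Runtime.UR]

# filter() states the exclusion once, up front, instead of an `if`
# inside the loop body.
selected = [r.value for r in filter(lambda r: r is not Runtime.UR, RUNTIMES)]
print(selected)
```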
```python
def get_tags(self):
    return [runtime_to_tag_name(self._runtime)]

def name(self):
```
Please follow the order of methods as in the base class for consistency: https://github.com/intel/llvm/blob/sycl/devops/scripts/benchmarks/CONTRIB.md#benchmark-class-structure
```python
    return f"{self.explicit_group()}_{self._runtime.value}"

def get_tags(self):
    return [runtime_to_tag_name(self._runtime)]
```
@lslusarczyk , perhaps we could use a new "pytorch" tag?
yes, please
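A minimal sketch of what the agreed change could look like. `runtime_to_tag_name` is stubbed here, the surrounding class is illustrative scaffolding, and "pytorch" is the tag string proposed in the review, not a tag that already exists in the suite:

```python
def runtime_to_tag_name(runtime):
    # Stub for the PR's real helper; the returned string is illustrative.
    return runtime.upper()

class TorchBenchmark:
    """Illustrative scaffolding, not the PR's actual class."""

    def __init__(self, runtime):
        self._runtime = runtime

    def get_tags(self):
        # Return the runtime tag plus the new "pytorch" tag from the review.
        return [runtime_to_tag_name(self._runtime), "pytorch"]
```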
```python
    return f"{self._test} {self._variant_name}"

def display_name(self) -> str:
    return f"{self.explicit_group()}_{self._runtime.value}"
```
This should be human-readable, no need for underscores. This is the name that gets printed on charts' titles.
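One way to make the name human-readable while keeping the same pieces as the quoted `display_name`; the class and constructor here are illustrative scaffolding, not the PR's actual code:

```python
class Bench:
    """Illustrative scaffolding around the quoted display_name."""

    def __init__(self, group, runtime_value):
        self._group = group
        self._runtime_value = runtime_value

    def explicit_group(self):
        return self._group

    def display_name(self) -> str:
        # A space instead of an underscore, since this string ends up
        # in chart titles and should read naturally.
        return f"{self.explicit_group()} {self._runtime_value}"
```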
```python
def test_torch_sycl(self):
    self._checkCase(
        "torch_benchmark_sycl kernelsPerQueue 20, workgroupCount 4096, workgroupSize 512",
        "KernelSubmitMultiQueue large",
```
nit: You could test all three input cases (large, medium, small) in these three tests instead of a single one.
good point

That is, not to add cases, but:
- in test_torch_sycl use large
- in test_torch_l0 use medium
- in the third one use small
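The suggestion could be sketched as below. `_checkCase` is stubbed, the third test's name and the medium/small command strings are invented placeholders rather than the PR's real parameters:

```python
import unittest

class TestTorchParsing(unittest.TestCase):
    """Illustrative sketch: one input size per runtime test."""

    def _checkCase(self, line, expected_label):
        # Stub for the PR's real helper, which parses `line` and compares
        # the derived benchmark name against `expected_label`.
        self.assertTrue(expected_label.startswith("KernelSubmitMultiQueue"))

    def test_torch_sycl(self):
        # Large case, as in the original test.
        self._checkCase(
            "torch_benchmark_sycl kernelsPerQueue 20, workgroupCount 4096, workgroupSize 512",
            "KernelSubmitMultiQueue large",
        )

    def test_torch_l0(self):
        # Medium case; the command string here is a placeholder.
        self._checkCase("torch_benchmark_l0 <medium parameters>", "KernelSubmitMultiQueue medium")

    def test_torch_ur(self):
        # Small case; test name and command string are placeholders.
        self._checkCase("torch_benchmark_ur <small parameters>", "KernelSubmitMultiQueue small")
```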
Could you share a screenshot of an example chart?
```python
        kernelsPerQueue=4,
    ),
]
# Add UR-specific benchmarks
```
Please add an empty line before this comment.
```python
    ),
]

for runtime in RUNTIMES:
```
Please add a short comment to this section saying which bench you're adding (see the other sections).
BTW, we're missing such a comment for the RecordAndReplay bench; can you please add one there as well?
Force-pushed from 07005cc to d9c2528
Force-pushed from 03e4181 to b7aea54
Created scripts to port PyTorch MultiQueue benchmarks