[KYUUBI #1950] Remove ambiguous SPARK_HADOOP_VERSION
### _Why are the changes needed?_

`SPARK_HADOOP_VERSION` was originally introduced only to assemble Spark release names. We now need to remove it because:

- `SPARK_HADOOP_VERSION` is misunderstood by developers and misused in places, e.g. taken to be the Hadoop version Kyuubi itself was compiled against
- Kyuubi now supports multiple engines
- the release names of Spark (or other engines) are easy to derive through code in any environment, prod/test/dev
- a `mvn` invocation is bundled with `bin/load-kyuubi-env.sh`, which is truly worrisome
- on the Spark side, `SPARK_HADOOP_VERSION` is already broken for Spark 3.2, which actually bundles Hadoop 3.3, see https://github.com/apache/spark-website/pull/361#discussion_r730716668

### _How was this patch tested?_

- [ ] Add some test cases that check the changes thoroughly, including negative and positive cases if possible
- [ ] Add screenshots for manual tests if appropriate
- [x] [Run tests](https://kyuubi.apache.org/docs/latest/develop_tools/testing.html#running-tests) locally before making a pull request

Closes #1950 from yaooqinn/hadoop.

Closes #1950

b47be7c6 [Kent Yao] Remove ambiguous SPARK_HADOOP_VERSION
3b33ee56 [Kent Yao] Remove ambiguous SPARK_HADOOP_VERSION

Authored-by: Kent Yao <yao@apache.org>
Signed-off-by: Kent Yao <yao@apache.org>
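As a side note on the bullet about deriving versions through code: a minimal sketch of the idea, assuming a conventional Spark distribution layout where `$SPARK_HOME/jars` contains jars like `hadoop-client-api-3.3.1.jar`, the bundled Hadoop version can be parsed from the jar name instead of relying on a hard-coded `SPARK_HADOOP_VERSION`. The jar name below is a stand-in for illustration:

```shell
#!/bin/sh
# Hypothetical sketch: derive the bundled Hadoop version from a jar
# file name rather than trusting a hard-coded SPARK_HADOOP_VERSION.
# In practice the name would come from listing $SPARK_HOME/jars.
jar="hadoop-client-api-3.3.1.jar"

# Strip the "hadoop-client-api-" prefix and ".jar" suffix,
# keeping only the dotted version number.
version=$(echo "$jar" | sed -n 's/^hadoop-client-api-\([0-9.]*\)\.jar$/\1/p')

echo "$version"   # prints 3.3.1 for the example jar above
```

This keeps the detected version consistent with what the distribution actually ships, which is exactly the property the hard-coded variable lost for Spark 3.2.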